1. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024; 174:241-255. PMID: 38582629; DOI: 10.1016/j.cortex.2024.02.016
Abstract
Shape is a property that can be perceived by both vision and touch, and is classically considered supramodal. While there is mounting evidence for a shared cognitive and neural representation space between visual and tactile shape, previous research has tended to rely on dissimilarity structures between objects and has not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to larger group differences for animals than for tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools, with which the blind have rich tactile/manipulation experience, the blind group produced shapes of overall quality comparable to the sighted group's, yet also showed intriguing differences: greater variation and a systematic bias in the geometric properties of tools (making them stubbier than the sighted group did), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, these results suggest that object shape representation reflects an intricate orchestration of vision, touch, and language.
2. Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev 2024. PMID: 38381302; DOI: 10.3758/s13423-024-02471-x
Abstract
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggests overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, but these have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics, using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations between them. We measured object recognition abilities using two visual tests and four haptic tests (two for each of two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities: while some mechanisms may generalize across categories, tasks, and modalities, others remain distinct between modalities.
3. Tactually-related cognitive impairments: sharing of neural substrates across associative tactile agnosia, agraphesthesia, and kinesthetic reading difficulty. Acta Neurol Belg 2023; 123:1893-1902. PMID: 36336779; DOI: 10.1007/s13760-022-02130-9
Abstract
INTRODUCTION: A precise understanding of the neural substrates underlying tactually-related cognitive impairments such as bilateral tactile agnosia, bilateral agraphesthesia, kinesthetic alexia, and kinesthetic reading difficulty is currently incomplete. In particular, recent data have implicated the lateral occipital tactile-visual region (LOtv) in tactile object naming (Amedi et al., Cerebral Cortex, 2002). This study therefore set out to examine the degree to which LOtv may be involved in tactually-related cognitive impairments by examining two unique cases.
METHODS: To assess whether LOtv or the visual word form area (VWFA) is involved in tactually-related cognitive impairments, the average activation points of LOtv and VWFA were placed on single-photon emission computed tomography (SPECT) cerebral blood flow images of two patients: one with bilateral associative tactile agnosia, bilateral agraphesthesia, and ineffective kinesthetic reading, and the other with kinesthetic reading difficulty.
RESULTS: The average LOtv coordinate fell within the area of hypoperfusion in both patients, whereas the VWFA coordinate was not included in any hypoperfused area.
CONCLUSIONS: The results support the view that interruption of LOtv, or disconnection between LOtv and VWFA, may cause these tactually-related cognitive impairments. Further, bilateral associative tactile agnosia and bilateral agraphesthesia are attributable to damage of the occipital lobe, whereas unilateral or predominantly one-sided associative tactile agnosia and agraphesthesia are attributable to damage of the parietal lobe.
4. Evidence for an amodal domain-general object recognition ability. Cognition 2023; 238:105542. PMID: 37419065; DOI: 10.1016/j.cognition.2023.105542
Abstract
A general object recognition ability predicts performance across a variety of high-level visual tests and categories, as well as in haptic recognition. Does this ability extend to auditory recognition? Vision and haptics tap into similar representations of shape and texture. In contrast, features of auditory perception such as pitch, timbre, or loudness do not readily translate into shape percepts related to edges, surfaces, or the spatial arrangement of parts. We find that an auditory object recognition ability correlates highly with a visual object recognition ability after controlling for general intelligence, perceptual speed, low-level visual ability, and memory ability. Auditory object recognition was a stronger predictor of visual object recognition than all control measures across two experiments, even though those control variables were also tested visually. These results point towards a single high-level ability used in both vision and audition. Much work highlights how the integration of visual and auditory information is important in specific domains (e.g., speech, music), with evidence for some overlap of visual and auditory neural representations. Our results are the first to reveal a domain-general ability, o, that predicts object recognition performance in both visual and auditory tests. Because o is domain-general, it reveals mechanisms that apply across a wide range of situations, independent of experience and knowledge. As o is distinct from general intelligence, it is well positioned to add predictive validity when explaining individual differences in a variety of tasks, above and beyond measures of common cognitive abilities such as general intelligence and working memory.
5. Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 2023; 21:e3001930. PMID: 37490508; PMCID: PMC10368275; DOI: 10.1371/journal.pbio.3001930
Abstract
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results conclusively support that the ILOTC selectively implements shape representation independently of visual experience, and this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
6. Engaging in word recognition elicits highly specific modulations in visual cortex. Curr Biol 2023; 33:1308-1320.e5. PMID: 36889316; PMCID: PMC10089978; DOI: 10.1016/j.cub.2023.02.042
Abstract
A person's cognitive state determines how their brain responds to visual stimuli. The most common such effect is a response enhancement when stimuli are task relevant and attended rather than ignored. In this fMRI study, we report a surprising twist on such attention effects in the visual word form area (VWFA), a region that plays a key role in reading. We presented participants with strings of letters and visually similar shapes, which were either relevant for a specific task (lexical decision or gap localization) or ignored (during a fixation dot color task). In the VWFA, the enhancement of responses to attended stimuli occurred only for letter strings, whereas non-letter shapes evoked smaller responses when attended than when ignored. The enhancement of VWFA activity was accompanied by strengthened functional connectivity with higher-level language regions. These task-dependent modulations of response magnitude and functional connectivity were specific to the VWFA and absent in the rest of visual cortex. We suggest that language regions send targeted excitatory feedback into the VWFA only when the observer is trying to read. This feedback enables the discrimination of familiar and nonsense words and is distinct from generic effects of visual attention.
7. Visuo-haptic object perception for robots: an overview. Auton Robots 2023. DOI: 10.1007/s10514-023-10091-y
Abstract
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
8. Visual and haptic cues in processing occlusion. Front Psychol 2023; 14:1082557. PMID: 36968748; PMCID: PMC10036393; DOI: 10.3389/fpsyg.2023.1082557
Abstract
Introduction: Although shape is effective in processing occlusion, ambiguities in segmentation can also be addressed using depth discontinuity given visually and haptically. This study elucidates the contribution of visual and haptic cues to depth discontinuity in processing occlusion.
Methods: A virtual reality experiment was conducted with 15 students as participants. Word stimuli were presented on a head-mounted display for recognition. The central part of the words was masked with a virtual ribbon placed at different depths so that the ribbon appeared as an occlusion. The visual depth cue was either present with binocular stereopsis or absent with monocular presentation. The haptic cue was either absent, or provided consecutively or concurrently by actively tracing a real off-screen bar edge that was positionally aligned with the ribbon in the virtual space. Recognition performance was compared between depth cue conditions.
Results: We found that word recognition was better with the stereoscopic cue but not with the haptic cue, although both cues contributed to greater confidence in depth estimation. Performance was better when the ribbon was at the farther depth plane, appearing as a hollow, than when it was at the nearer depth plane, covering the word.
Discussion: The results indicate that occlusion is processed in the human brain from visual input only, despite the apparent effectiveness of haptic space perception, reflecting a complex set of natural constraints.
9. Loss of action-related function and connectivity in the blind extrastriate body area. Front Neurosci 2023; 17:973525. PMID: 36968509; PMCID: PMC10035577; DOI: 10.3389/fnins.2023.973525
Abstract
The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA’s perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA’s connectivity profile in a counterintuitive way—functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
10. Contrary neuronal recalibration in different multisensory cortical areas. eLife 2023; 12:82895. PMID: 36877555; PMCID: PMC9988259; DOI: 10.7554/elife.82895
Abstract
The adult brain demonstrates remarkable multisensory plasticity by dynamically recalibrating itself based on information from multiple sensory sources. After a systematic visual-vestibular heading offset is experienced, the unisensory perceptual estimates for subsequently presented stimuli are shifted toward each other (in opposite directions) to reduce the conflict. The neural substrate of this recalibration is unknown. Here, we recorded single-neuron activity from the dorsal medial superior temporal (MSTd), parietoinsular vestibular cortex (PIVC), and ventral intraparietal (VIP) areas in three male rhesus macaques during this visual-vestibular recalibration. Both visual and vestibular neuronal tuning curves in MSTd shifted, each according to its respective cue's perceptual shift. Tuning of vestibular neurons in PIVC also shifted in the same direction as vestibular perceptual shifts (these cells were not robustly tuned to the visual stimuli). By contrast, VIP neurons demonstrated a unique phenomenon: both vestibular and visual tuning shifted in accordance with vestibular perceptual shifts, such that visual tuning shifted, surprisingly, contrary to visual perceptual shifts. Therefore, while unsupervised recalibration (to reduce cue conflict) occurs in early multisensory cortices, higher-level VIP reflects only a global shift, in vestibular space.
11. Neural similarities and differences between native and second languages in the bilateral fusiform cortex in Chinese-English bilinguals. Neuropsychologia 2023; 179:108464. PMID: 36565993; DOI: 10.1016/j.neuropsychologia.2022.108464
Abstract
In the field of bilingualism, researchers have proposed an assimilation hypothesis that posits that bilinguals apply the neural network of their native language to process their second language. In Chinese-English bilinguals, the bilateral fusiform gyrus has been identified as the key brain region showing the assimilation process. Specifically, in contrast to left-lateralized activation in the fusiform gyrus in native English speakers, Chinese-English bilinguals recruit the bilateral fusiform cortex to process English words as they do in the processing of Chinese characters. Nevertheless, it is unclear which type of information processing is assimilated in the fusiform gyrus. Using representational similarity analysis (RSA) and psychophysiological interaction (PPI) analysis, this study examined the differences in information representation and functional connectivity between both languages in the fusiform subregions in Chinese-English bilinguals. Univariate analysis revealed that both Chinese and English naming elicited strong activations in the bilateral fusiform gyrus, which confirmed the assimilation process at the activation intensity level. RSA indicated that the neural pattern of English phonological information was assimilated by Chinese in the anterior and middle right fusiform gyrus, while those of orthographic and visual form information were not. Further PPI analysis demonstrated that the neural representation of English phonological information in the right anterior fusiform subregion was related to its interaction with the frontotemporal areas for high-level linguistic processing, while the neural representation of English orthographic information in the right middle fusiform subregion was linked to its interaction with the left inferior occipital cortex for visual processing. 
These results suggest that, despite the recruitment of similar neural resources in one's native and second languages, the assimilation of information representation is limited in the bilateral fusiform cortex. Our results shed light on the neural mechanisms of second language processing.
12. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. PMID: 36711132; PMCID: PMC9879291; DOI: 10.3389/fnins.2022.962817
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiry. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, to perceive visual illusions, to use cross-modal mappings between touch and vision, and to group spatially based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical periods theory, and provides additional insight into Molyneux's problem: whether vision and touch can be correlated immediately. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometric perception abilities that strengthen the idea that spontaneous geometry intuitions arise independently of visual experience (and education), thus replicating and extending previous studies. We introduce a previously unexplored model: testing children who have undergone congenital cataract removal surgery and who perform the tasks via vision, whereas previous work has explored these abilities in the congenitally blind via touch.
Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).
13. Functional relevance of the extrastriate body area for visual and haptic object recognition: a preregistered fMRI-guided TMS study. Cereb Cortex Commun 2023; 4:tgad005. PMID: 37188067; PMCID: PMC10176024; DOI: 10.1093/texcom/tgad005
Abstract
The extrastriate body area (EBA) is a region in the lateral occipito-temporal cortex (LOTC) that is sensitive to perceived body parts. Neuroimaging studies have suggested that EBA is involved in body and tool processing regardless of sensory modality. However, how essential this region is for visual tool processing and nonvisual object processing remains a matter of controversy. In this preregistered fMRI-guided repetitive transcranial magnetic stimulation (rTMS) study, we examined the causal involvement of EBA in multisensory body and tool recognition. Participants used either vision or haptics to identify three object categories: hands, teapots (tools), and cars (control objects). Continuous theta-burst stimulation (cTBS) was applied over left EBA, right EBA, or the vertex (control site). Performance for visually perceived hands and teapots (relative to cars) was more strongly disrupted by cTBS over left EBA than over the vertex, whereas no such object-specific effect was observed in haptics. Simulation of the induced electric fields confirmed that the cTBS affected regions including EBA. These results indicate that the LOTC is functionally relevant for visual hand and tool processing, whereas rTMS over EBA may affect object recognition differently in the two sensory modalities.
14. A neurocomputational analysis of visual bias on bimanual tactile spatial perception during a crossmodal exposure. Front Neural Circuits 2022; 16:933455. PMID: 36439678; PMCID: PMC9684216; DOI: 10.3389/fncir.2022.933455
Abstract
Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuo-tactile (VT) experiences. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception, in a brightness-dependent manner, on a task requiring participants to detect unimanual and bimanual cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates that are changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding potential network-level changes that may underlie the VT learning effects. Simulation results indicated that VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. Similarly, VT learning effects were also inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate VT learning effects. Our results imply that crossmodal learning of bimanual spatial perception involves multiple changes distributed over a VT processing cortical network.
15. Neural Correlates of Oral Stereognosis—An fMRI Study. Dysphagia 2022; 38:923-932. PMID: 36087119; PMCID: PMC10182931; DOI: 10.1007/s00455-022-10517-2
Abstract
Oral stereognosis is the ability to recognize, discriminate, and localize a bolus in the oral cavity. Clinical observation indicates deficits in oral stereognosis in patients with vascular or neurodegenerative diseases, particularly those affecting the parietal lobes. However, the precise neural representation of oral stereognosis remains unclear, whereas the neural network of manual stereognosis has already been identified. We hypothesized that oral and manual stereognosis share common neuronal substrates while also showing a somatotopic distribution. Functional magnetic resonance images (fMRI; Siemens Prisma 3 T) were acquired from 20 healthy right-handed participants (11 female; mean age 25.7 years) performing a cross-modal task of oral and manual spatial object manipulation. Data were analyzed in FSL using a block design and standard analytical and statistical procedures; a conjunction analysis targeted the common neuronal substrate for stereognosis. Activations associated with manual and oral stereognosis were found in partially overlapping fronto-parietal networks in a somatotopic fashion, with oral stereognosis located caudal to manual stereognosis. A significant overlap was seen in the left anterior intraparietal sulcus. Additionally, cerebellar activations were shown, particularly for the oral condition. Spatial arrangement of shaped boli in the oral cavity is thus associated with neuronal activity in fronto-parietal networks and the cerebellum. These findings have significant implications for clinical diagnostics and the management of patients with lesions or atrophy in the parietal lobule (e.g., Alzheimer's disease, stroke). More studies are required to investigate the clinical effects of damage to these areas, such as loss of oral stereognosis or an impaired oral phase.
16. Impact of blindness onset on the representation of sound categories in occipital and temporal cortices. eLife 2022; 11:79370. PMID: 36070354; PMCID: PMC9451537; DOI: 10.7554/elife.79370
Abstract
The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of uni- and multivoxel analyses applied to fMRI data, we extend our previous findings, comprehensively investigating how early and late acquired blindness affect the cortical regions coding for the deprived and the remaining senses. First, we show an enhanced univariate response to sounds in part of the occipital cortex of both blind groups, concomitant with reduced auditory responses in temporal regions. We then reveal that the representation of sound categories in occipital and temporal regions is more similar in blind subjects than in sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the ‘human voice’ category that we observed in the temporal cortex of all sighted and blind groups is enhanced in occipital regions in the blind groups, suggesting that the representation of vocal information is more similar between occipital and temporal regions in blind than in sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g., pitch, harmonicity) in occipital and temporal regions, but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.
|
17
|
Complex Shapes Are Bluish, Darker, and More Saturated; Shape-Color Correspondence in 3D Object Perception. Front Psychol 2022; 13:854574. [PMID: 35602700 PMCID: PMC9114860 DOI: 10.3389/fpsyg.2022.854574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 03/30/2022] [Indexed: 11/25/2022] Open
Abstract
It has been shown that there is a non-random association between shape and color. However, the results of previous studies on the shape-color correspondence did not converge. To address the issue, we focused on shape complexity among a number of shape properties, particularly in terms of 3D shape, and parametrically manipulated the shape complexity and all three components of color. With two experiments, the current study aimed to closely examine the correspondence between the shape complexity of 3D shapes and color in terms of hue (Experiment 1), luminance, and saturation (Experiment 2). Participants were presented with the 3D shapes in either visual or visuo-haptic modes of exploration. Subsequently, they had to pick from a color palette the color best matching each shape of the object. In Experiment 1, we found that as shapes became more complex, the best-associated hue changed from those with long wavelengths to ones with short wavelengths. Results of Experiment 2 demonstrated that as the shapes grew more complex, the associated luminance decreased and saturation increased. Additionally, adding haptic exploration to visual exploration strengthened the association, for saturation in particular, with the pattern of shape-color correspondence maintained. Taken together, we demonstrated that complex shapes are associated with bluish, darker, and more saturated colors, suggesting that shape complexity has a systematic relationship with color, including hue, luminance, and saturation.
|
18
|
Asymmetric switch cost between subitizing and estimation in tactile modality. CURRENT PSYCHOLOGY 2022. [DOI: 10.1007/s12144-022-02858-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|
19
|
A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022; 16:149-165. [PMID: 35126775 PMCID: PMC8807798 DOI: 10.1007/s11571-021-09703-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 06/26/2021] [Accepted: 07/14/2021] [Indexed: 02/03/2023] Open
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without. Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the need for longer memory. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESN), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
|
20
|
Altered effective connectivity between lateral occipital cortex and superior parietal lobule contributes to manipulability-related modulation of the Ebbinghaus illusion. Cortex 2022; 147:194-205. [DOI: 10.1016/j.cortex.2021.11.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 08/30/2021] [Accepted: 11/30/2021] [Indexed: 11/03/2022]
|
21
|
Research on tactile perception by skin friction based on a multimodal method. Skin Res Technol 2021; 28:280-290. [PMID: 34935201 PMCID: PMC9907616 DOI: 10.1111/srt.13127] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 10/16/2021] [Indexed: 11/28/2022]
Abstract
BACKGROUND Tactile perception is an essential function of the skin. Because this research involves many fields, such as skin friction, psychology, and neuroscience, findings on tactile perception are scattered across these fields and rely on different research methods. It is therefore necessary to study the whole tactile loop in a multimodal way, synchronizing all tactile information. MATERIALS AND METHODS To measure information from touch to haptics, we developed a specially designed measuring platform connected to an electroencephalogram (EEG) recording system. Sandpapers with different roughness were used as samples. First, the surface properties were measured in tribological experiments. Second, psychophysical experiments were conducted to assess the volunteers' cognition of the samples' roughness. Third, the mechanical parameters and EEG were measured simultaneously during fingertip sliding on the samples. The data for all four tactile elements were then processed and analyzed separately, and characteristic features were extracted from those data in the time-frequency domain. Furthermore, the correlation coefficient was calculated in pairwise comparisons of each element to evaluate the feasibility of the multimodal method for the study of tactile perception. RESULTS The 600-mesh sandpaper has the largest Ra, Rz, Rsm, and particle size. The normal load, friction force, spectral centroid, and α- and β-wave energy ratios of the EEG at the chosen electrodes show significant differences and correlations between the 3000- and 600-mesh sandpapers in general. CONCLUSION This multimodal method can be used in the study of tactile perception, offering a comprehensive way to observe the whole tactile loop from multiple perspectives.
|
22
|
Visual and Tactile Sensory Systems Share Common Features in Object Recognition. eNeuro 2021; 8:ENEURO.0101-21.2021. [PMID: 34544756 PMCID: PMC8493885 DOI: 10.1523/eneuro.0101-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 08/24/2021] [Accepted: 08/31/2021] [Indexed: 11/24/2022] Open
Abstract
Although we use our visual and tactile sensory systems interchangeably for object recognition on a daily basis, little is known about the mechanism underlying this ability. This study examined how 3D shape features of objects form two congruent and interchangeable visual and tactile perceptual spaces in healthy male and female participants. Since active exploration plays an important role in shape processing, a virtual reality environment was used to visually explore 3D objects called digital embryos without using the tactile sense. In addition, during the tactile procedure, blindfolded participants actively palpated a 3D-printed version of the same objects with both hands. We first demonstrated that the visual and tactile perceptual spaces were highly similar. We then extracted a series of 3D shape features to investigate how visual and tactile exploration can lead to the correct identification of the relationships between objects. The results indicate that both modalities share the same shape features to form highly similar veridical spaces. This finding suggests that visual and tactile systems might apply similar cognitive processes to sensory inputs that enable humans to rely merely on one modality in the absence of another to recognize surrounding objects.
|
23
|
Haptic object recognition based on shape relates to visual object recognition ability. PSYCHOLOGICAL RESEARCH 2021; 86:1262-1273. [PMID: 34355269 PMCID: PMC8341045 DOI: 10.1007/s00426-021-01560-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Accepted: 07/16/2021] [Indexed: 11/23/2022]
Abstract
Visual object recognition depends in large part on a domain-general ability (Richler et al. Psychol Rev 126(2): 226–251, 2019). Given evidence pointing towards shared mechanisms for object perception across vision and touch, we ask whether individual differences in haptic and visual object recognition are related. We use existing validated visual tests to estimate visual object recognition ability and relate it to performance on two novel tests of haptic object recognition ability (n = 66). One test includes complex objects that participants chose to explore with a hand grasp. The other test uses a simpler stimulus set that participants chose to explore with just their fingertips. Only performance on the haptic test with complex stimuli correlated with visual object recognition ability, suggesting a shared source of variance across task structures, stimuli, and modalities. A follow-up study using a visual version of the haptic test with simple stimuli showed a correlation with the original visual tests, suggesting that the limited complexity of the stimuli did not prevent a correlation with visual object recognition ability. Instead, we propose that the manner of exploration may be a critical factor in whether a haptic test relates to visual object recognition ability. Our results suggest a perceptual ability that spans at least vision and touch; however, it may not be recruited during fingertip-only exploration.
|
24
|
Distinct Functional and Structural Connectivity of the Human Hand-Knob Supported by Intraoperative Findings. J Neurosci 2021; 41:4223-4233. [PMID: 33827936 DOI: 10.1523/jneurosci.1574-20.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Revised: 01/02/2021] [Accepted: 01/10/2021] [Indexed: 12/15/2022] Open
Abstract
Fine motor skills rely on the control of hand muscles exerted by a region of primary motor cortex (M1) that has been extensively investigated in monkeys. Although neuroimaging also enables the exploration of this system in humans, indirect measurements of brain activity prevent causal definitions of hand motor representations, which can be achieved using data obtained during brain mapping in tumor patients. High-frequency direct electrical stimulation delivered at rest (HF-DES-Rest) on the hand-knob region of the precentral gyrus has identified two sectors showing differences in cortical excitability. Using quantitative analysis of the motor output elicited with HF-DES-Rest, we characterized two sectors based on their excitability, higher in the posterior and lower in the anterior sector. We studied whether the different cortical excitability of these two regions reflected differences in functional connectivity (FC) and structural connectivity (SC). Using healthy adults from the Human Connectome Project (HCP), we computed the FC and SC of the anterior and posterior hand-knob sectors identified within a large cohort of patients. The comparison of the FC of the two seeds showed that the anterior hand-knob, relative to the posterior hand-knob, had stronger functional connections with a bilateral set of parietofrontal areas responsible for integrating perceptual and cognitive hand-related sensorimotor processes necessary for goal-related actions. This was reflected in different patterns of SC between the two sectors. Our results suggest that the human hand-knob is a functionally and structurally heterogeneous region organized along a motor-cognitive gradient. SIGNIFICANCE STATEMENT The capability to perform complex manipulative tasks is one of the major characteristics of primates and relies on the fine control of hand muscles exerted by a highly specialized region of the precentral gyrus, often termed the "hand-knob" sector. Using intraoperative brain mapping, we identified two hand-knob sectors (posterior and anterior) characterized by differences in cortical excitability. Based on resting-state FC and tractography in healthy subjects, we show that the posterior and anterior hand-knob sectors differ in their FC and SC with frontoparietal regions. Thus, anteroposterior differences in cortical excitability are paralleled by differences in FC and SC that likely reflect a motor (posterior) to cognitive (anterior) organization of this cortical region.
|
25
|
High-definition transcranial direct current stimulation of the lateral occipital cortex influences figure-ground perception. Neuropsychologia 2021; 155:107792. [PMID: 33610616 DOI: 10.1016/j.neuropsychologia.2021.107792] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 01/11/2021] [Accepted: 02/14/2021] [Indexed: 01/28/2023]
Abstract
Prior work has shown that the lateral occipital cortex (LO) is involved in recognition of objects and their parts, as well as segregation of that object (or "figure") from its background. No studies, though, have examined how LO's functioning is influenced by non-invasive brain stimulation, particularly during a figure-ground perception task. The present study tested whether high-definition transcranial direct current stimulation (HD-tDCS) to right LO influences the effects of familiarity on figure-ground perception. Following 20 min of offline anodal stimulation (or sham), participants viewed masked stimuli consisting of two regions separated by a vertical border and were asked to report which region they perceived as figure. One region was the "critical" region, which either depicted a portion of a familiar object ("Familiar" stimuli), or a familiar object with its parts rearranged into a novel configuration ("Part-rearranged" stimuli). Previous research using these stimuli has found higher reports of the critical region as figure for Familiar vs. Part-rearranged displays, demonstrating the effect of familiarity on figure assignment. The results of the current study showed that HD-tDCS to right LO significantly influenced this typical behavioral pattern. Specifically, stimulation (vs. sham) increased reports of the critical region as figure for Part-rearranged stimuli, bringing perception of these displays up to the level of the Familiar stimuli. We interpret this finding as evidence that stimulation of right LO increased participants' reliance on the familiarity of the parts in their figure-ground judgements, a finding consistent with and extending previous research showing that LO is indeed sensitive to object parts. This is the first study showing that HD-tDCS to LO can influence the effects of familiarity on figure-ground perception.
|
26
|
Asymmetric Bálint's syndrome with multimodal agnosia, bilateral agraphesthesia, and ineffective kinesthetic reading due to subcortical hemorrhage in the left parieto-occipito-temporal area. Neurocase 2020; 26:328-339. [PMID: 33103577 DOI: 10.1080/13554794.2020.1831546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
We report a patient with asymmetric Bálint's syndrome (predominantly right-sided oculomotor apraxia and simultanagnosia and optic ataxia for the right hemispace), and multimodal agnosia (apperceptive visual agnosia and bilateral associative tactile agnosia) with accompanying right hemianopia, bilateral agraphesthesia, hemispatial neglect, global alexia with unavailable kinesthetic reading, and lexical agraphia for kanji (Japanese morphograms), after hemorrhage in the left parieto-occipito-temporal area. The coexistence of tactile agnosia, bilateral agraphesthesia, and ineffective kinesthetic reading suggests that tactile-kinesthetic information can be interrupted because of damage to the fiber connection from the parietal lobe to the occipito-temporal area, leading to these tactually related cognitive impairments.
|
27
|
Dorsal type letter-by-letter reading accompanying alexia with agraphia due to a lesion of the lateral occipital gyri. Neurocase 2020; 26:285-292. [PMID: 32804589 DOI: 10.1080/13554794.2020.1803922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
We report a patient with alexia with agraphia accompanied by letter-by-letter reading after hemorrhage in the left middle and inferior occipital gyri that spared the angular gyrus and the fusiform gyrus. Kanji (Japanese morphograms) and kana (Japanese phonetic writing) reading and writing tests revealed that alexia with agraphia was characterized by kana-predominant alexia and kanji-predominant agraphia. This type of "dorsal" letter-by-letter reading is discernable from conventional ventral type letter-by-letter reading that is observed in pure alexia in that (1) kinesthetic reading is less effective, (2) kana or literal agraphia coexists, and (3) fundamental visual discrimination is nearly normal.
|
28
|
The Nature of Haptic Working Memory Capacity and Its Relation to Visual Working Memory. Multisens Res 2020; 33:837-864. [PMID: 33706264 DOI: 10.1163/22134808-bja10007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Accepted: 03/08/2020] [Indexed: 11/19/2022]
Abstract
I conducted three experiments to investigate haptic working memory capacity using a haptic change detection task with 2D stimuli. I adopted a single-task paradigm comprising haptic single-feature (orientation or texture) and haptic multifeature (orientation and texture) conditions in Experiment 1 and a dual-task paradigm with a primary haptic orientation or texture change detection task and a concurrent secondary visual shape or colour change detection task in Experiments 2-3. I observed that in the single-task paradigm, haptic change detection capacity was higher for single features than for multiple features. In haptic working memory, unlike in visual working memory, features of two different dimensions within an object cannot be integrated. In the dual-task paradigm, interference was observed when the concurrent visual shape change detection task was combined with the haptic orientation change detection task, although interference was not observed when the concurrent visual colour change detection task was combined with it. In addition, the concurrent visual shape or colour change detection task did not interfere with the capacity for haptic texture memory, which was higher than that for haptic orientation memory. These findings suggest that geometric properties may be retained in a common storage system shared between haptic and visual working memory, whereas haptic texture might be retained in an independent, stable storage system that is haptic-specific.
|
29
|
The role of the anterior intraparietal sulcus and the lateral occipital cortex in fingertip force scaling and weight perception during object lifting. J Neurophysiol 2020; 124:557-573. [PMID: 32667252 PMCID: PMC7500375 DOI: 10.1152/jn.00771.2019] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Skillful object lifting relies on scaling fingertip forces according to the object’s weight. When no visual cues about weight are available, force planning relies on previous lifting experience. Recently, we showed that previously lifted objects also affect weight estimation, as objects are perceived to be lighter when lifted after heavy objects compared with after light ones. Here, we investigated the underlying neural mechanisms mediating these effects. We asked participants to lift objects and estimate their weight. Simultaneously, we applied transcranial magnetic stimulation (TMS) during the dynamic loading or static holding phase. Two subject groups received TMS over either the anterior intraparietal sulcus (aIPS) or the lateral occipital area (LO), known to be important nodes in object grasping and perception. We hypothesized that TMS over aIPS and LO during object lifting would alter force scaling and weight perception. Contrary to our hypothesis, we did not find effects of aIPS or LO stimulation on force planning or weight estimation caused by previous lifting experience. However, we found that TMS over both areas increased grip forces, but only when applied during dynamic loading, and decreased weight estimation, but only when applied during static holding, suggesting time-specific effects. Interestingly, our results also indicate that TMS over LO, but not aIPS, affected load force scaling specifically for heavy objects, which further indicates that load and grip forces might be controlled differently. These findings provide new insights on the interactions between brain networks mediating action and perception during object manipulation. NEW & NOTEWORTHY This article provides new insights into the neural mechanisms underlying object lifting and perception. 
Using transcranial magnetic stimulation during object lifting, we show that effects of previous experience on force scaling and weight perception are not mediated by the anterior intraparietal sulcus or the lateral occipital cortex (LO). In contrast, we highlight a unique role for LO in load force scaling, suggesting different brain processes for grip and load force scaling in object manipulation.
|
30
|
The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion. Neuroimage 2020; 213:116679. [DOI: 10.1016/j.neuroimage.2020.116679] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 01/15/2020] [Accepted: 02/23/2020] [Indexed: 10/24/2022] Open
|
31
|
Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neurosci Biobehav Rev 2020; 113:227-237. [DOI: 10.1016/j.neubiorev.2020.03.019] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 01/29/2020] [Accepted: 03/16/2020] [Indexed: 11/23/2022]
|
32
|
Abstract
Two experiments evaluated the importance of temporal integration for the perception and discrimination of solid object shape. In Experiment 1, observers anorthoscopically viewed moving or stationary cast shadows of naturally shaped solid objects (bell peppers, Capsicum annuum) through narrow (4-mm wide) slits. At any given moment, observers could only see a very small portion of the overall object shape (generally less than 10%). The results showed that the observers' discrimination performance for the moving cast shadows was much higher than that obtained for the stationary shadows, demonstrating the ability to temporally integrate the piecemeal momentary information about shape that was available through the narrow apertures. In a second experiment, estimates of the strength of the observers' impressions of solid shapes rotating in depth were obtained as well as discrimination accuracies; perceptions of the original moving condition were compared with a new condition where the frames of the apparent motion sequences depicting solid objects in continuous motion (behind the slits) were randomly scrambled. The observers perceived the anorthoscopic displays as depicting solid objects rotating in depth, but only in the continuous motion condition. Interestingly, the discrimination performance in the scrambled condition remained relatively high-observers were still able to integrate information across the multiple scrambled frames in order to produce discrimination performance that was significantly higher than that obtained in the stationary shadow condition. This study was the first to thoroughly evaluate whether and to what extent human observers can effectively discriminate and perceive solid object shape anorthoscopically.
|
33
|
Combat exposure, posttraumatic stress disorder, and head injuries differentially relate to alterations in cortical thickness in military Veterans. Neuropsychopharmacology 2020; 45:491-498. [PMID: 31600766 PMCID: PMC6969074 DOI: 10.1038/s41386-019-0539-9] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Revised: 09/23/2019] [Accepted: 10/01/2019] [Indexed: 12/30/2022]
Abstract
Combat-exposed Veterans are at increased risk for developing psychological distress, mood disorders, and trauma and stressor-related disorders. Trauma and mood disorders have been linked to alterations in brain volume, function, and connectivity. However, far less is known about the effects of combat exposure on brain health. The present study examined the relationship between severity of combat exposure and cortical thickness. Post-9/11 Veterans (N = 337; 80% male) were assessed with structural neuroimaging and clinically for combat exposure, depressive symptoms, prior head injury, and posttraumatic stress disorder (PTSD). Vertex-wide cortical thickness was estimated using FreeSurfer autosegmentation. FreeSurfer's Qdec was used to examine relationship between combat exposure, PTSD, and prior head injuries on cortical thickness (Monte Carlo corrected for multiple comparisons, vertex-wise cluster threshold of 1.3, p < 0.01). Covariates included age, sex, education, depressive symptoms, nonmilitary trauma, alcohol use, and prior head injury. Higher combat exposure uniquely related to lower cortical thickness in the left prefrontal lobe and increased cortical thickness in the left middle and inferior temporal lobe; whereas PTSD negatively related to cortical thickness in the right fusiform. Head injuries related to increased cortical thickness in the bilateral medial prefrontal cortex. Combat exposure uniquely contributes to lower cortical thickness in regions implicated in executive functioning, attention, and memory after accounting for the effects of PTSD and prior head injury. Our results highlight the importance of examining effects of stress and trauma exposure on neural health in addition to the circumscribed effects of specific syndromal pathology.
|
34
|
Anatomy and white matter connections of the lateral occipital cortex. Surg Radiol Anat 2019; 42:315-328. [DOI: 10.1007/s00276-019-02371-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2019] [Accepted: 10/23/2019] [Indexed: 01/26/2023]
|
35
|
Properties of cross-modal occipital responses in early blindness: An ALE meta-analysis. NEUROIMAGE-CLINICAL 2019; 24:102041. [PMID: 31677587 PMCID: PMC6838549 DOI: 10.1016/j.nicl.2019.102041] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Revised: 09/20/2019] [Accepted: 10/17/2019] [Indexed: 11/10/2022]
Abstract
ALE meta-analysis reveals distributed brain networks for object and spatial functions in individuals with early blindness. ALE contrast analysis reveals specific activations in the left cuneus and lingual gyrus for language function, suggesting a reverse hierarchical organization of the visual cortex for early blind individuals. The findings contribute to visual rehabilitation in blind individuals by revealing the function-dependent and sensory-independent networks during nonvisual processing.
Cross-modal occipital responses appear to be essential for nonvisual processing in individuals with early blindness. However, it is not clear whether the recruitment of occipital regions depends on functional domain or sensory modality. The current study utilized a coordinate-based meta-analysis to identify the distinct brain regions involved in the functional domains of object, spatial/motion, and language processing and the common brain regions involved in both auditory and tactile modalities in individuals with early blindness. Following the PRISMA guidelines, a total of 55 studies were included in the meta-analysis. The specific analyses revealed the brain regions that are consistently recruited for each function, such as the dorsal fronto-parietal network for spatial function and ventral occipito-temporal network for object function. This is consistent with the literature, suggesting that the two visual streams are preserved in early blind individuals. The contrast analyses found specific activations in the left cuneus and lingual gyrus for language function. This finding is novel and suggests a reverse hierarchical organization of the visual cortex for early blind individuals. The conjunction analyses found common activations in the right middle temporal gyrus, right precuneus and a left parieto-occipital region. Clinically, this work contributes to visual rehabilitation in early blind individuals by revealing the function-dependent and sensory-independent networks during nonvisual processing.
|
36
|
Large-scale temporo–parieto–frontal networks for motor and cognitive motor functions in the primate brain. Cortex 2019; 118:19-37. [DOI: 10.1016/j.cortex.2018.09.024] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Revised: 09/21/2018] [Accepted: 09/28/2018] [Indexed: 10/28/2022]
|
37
|
Abstract
In the past few years a new scenario for robot-based applications has emerged. Service and mobile robots have opened new market niches. Also, new frameworks for shop-floor robot applications have been developed. In all these contexts, robots are requested to perform tasks within open-ended, possibly dynamically varying conditions. These new requirements also call for a change of paradigm in the design of robots: on-line and safe feedback motion control becomes the core of modern robot systems. Future robots will learn autonomously, interact safely and possess qualities like self-maintenance. Attaining these features would be relatively easy if a complete model of the environment were available, and if the robot actuators could execute motion commands perfectly relative to this model. Unfortunately, a complete world model is not available, and robots have to plan and execute tasks in the presence of environmental uncertainties, which makes sensing an important component of new-generation robots. For this reason, today's new-generation robots are equipped with more and more sensing components, and consequently they are ready to actively deal with the high complexity of the real world. Complex sensorimotor tasks such as exploration require coordination between the motor system and sensory feedback. For robot control purposes, sensory feedback should be adequately organized in terms of relevant features and the associated data representation. In this paper, we propose an overall functional picture linking sensing to action in closed-loop sensorimotor control of robots for touch (hands, fingers). Basic qualities of haptic perception in humans inspire the models and categories comprising the proposed classification. The objective is to provide a reasoned, principled perspective on the connections between the different taxonomies used in the robotics and human haptics literature. The specific case of active exploration is chosen to ground interesting use cases.
Two reasons motivate this choice. First, in the literature on haptics, exploration has been treated only to a limited extent compared to grasping and manipulation. Second, exploration involves specific robot behaviors that exploit distributed and heterogeneous sensory data.
|
38
|
The neural underpinnings of haptically guided functional grasping of tools: An fMRI study. Neuroimage 2019; 194:149-162. [DOI: 10.1016/j.neuroimage.2019.03.043] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Revised: 01/26/2019] [Accepted: 03/19/2019] [Indexed: 10/27/2022] Open
|
39
|
Abstract
When faced with a novel object, we explore it to understand its shape. In doing so, we combine information coming from different senses, such as touch, proprioception and vision, together with the motor information embedded in our motor execution plan. The exploration process provides a structure for, and constrains, this rich flow of inputs, supporting the formation of a unified percept and the memorization of the object features. However, how exploration strategies are planned is still an open question. In particular, is the exploration strategy used to memorize an object different from the exploration strategy adopted in a recall task? To address this question, we used iCube, a sensorized cube that measures its orientation in space and the location of the contacts on its faces. Participants were required to explore the cube faces, on which small pins were positioned in varying numbers. Participants had to explore the cube twice and identify potential differences between the two presentations, which could be done either haptically alone or with vision also available. The haptic and visuo-haptic (VH) exploratory strategies changed significantly when aimed at memorizing the structure of the object compared to when the same object was explored to recall and compare it with its memorized instance. These findings indicate that exploratory strategies are adapted not only to the properties of the object to be analyzed but also to the prospective use of the resulting representation, be it memorization or recall. The results are discussed in light of the possibility of a systematic modeling of natural VH exploration strategies.
|
40
|
Mental Rotation of Digitally-Rendered Haptic Objects. Front Integr Neurosci 2019; 13:7. [PMID: 30930756 PMCID: PMC6427928 DOI: 10.3389/fnint.2019.00007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Accepted: 02/25/2019] [Indexed: 11/13/2022] Open
Abstract
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. 
Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
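The rotation-angle effect reported above is classically quantified by regressing response time on angular deviation from upright, where rotations past 180° fold back (a 270° rotation is only 90° from upright). A minimal sketch of that analysis, on invented response times rather than data from this study:

```python
import numpy as np

# Angular deviation from upright; rotations past 180 deg fold back
# (a 270-deg rotation is only 90 deg away from upright).
angles = np.array([0, 90, 180, 270])
deviation = np.minimum(angles, 360 - angles)

# Hypothetical mean response times in seconds (invented for illustration).
rts = np.array([0.82, 1.10, 1.41, 1.08])

# Linear fit: the slope estimates the mental rotation cost per degree.
slope, intercept = np.polyfit(deviation, rts, 1)
ms_per_degree = slope * 1000
```

A positive slope of a few milliseconds per degree is the hallmark signature of mental rotation; a flat slope would suggest a non-rotational strategy.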
|
41
|
Representations of microgeometric tactile information during object recognition. Cogn Process 2018; 20:19-30. [PMID: 30446884 DOI: 10.1007/s10339-018-0892-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Accepted: 11/03/2018] [Indexed: 11/26/2022]
Abstract
Object recognition through tactile perception involves two elements: the shape of the object (macrogeometric properties) and the material of the object (microgeometric properties). Here we sought to determine the characteristics of microgeometric tactile representations in object recognition through tactile perception. Participants were directed to recognize objects with different surface materials using either tactile information or visual information. With a quantitative analysis of the cognitive process underlying object recognition, Experiment 1 confirmed that the same eight concepts (composed of rules defining distinct cognitive processes) were commonly generated in both tactile and visual perception to accomplish the task, although an additional concept was generated during the visual task. Experiment 2 focused only on tactile perception. Three tactile objects with different surface materials (plastic, cloth and sandpaper) were used for the object recognition task. The participants answered a questionnaire regarding the process leading to their answers (designed based on the results obtained in Experiment 1) and provided ratings of vividness, familiarity and affective valence. We used these experimental data to investigate whether changes in material attributes (tactile information) change the characteristics of tactile representation. The observations showed that differences in tactile information resulted in differences in cognitive processes, vividness, familiarity and emotionality. These two experiments collectively indicated that microgeometric tactile information contributes to object recognition by recruiting various cognitive processes, including episodic memory and emotion, similar to the case of object recognition by visual information.
|
42
|
Abstract
The spatial context in which we view a visual stimulus strongly determines how we perceive the stimulus. In the visual tilt illusion, the perceived orientation of a visual grating is affected by the orientation signals in its surrounding context. Conceivably, the spatial context in which a visual grating is perceived can be defined by interactive multisensory information rather than visual signals alone. Here, we tested the hypothesis that tactile signals engage the neural mechanisms supporting visual contextual modulation. Because tactile signals also convey orientation information and touch can selectively interact with visual orientation perception, we predicted that tactile signals would modulate the visual tilt illusion. We applied a bias-free method to measure the tilt illusion while testing visual-only, tactile-only or visuo-tactile contextual surrounds. We found that a tactile context can influence visual tilt perception. Moreover, combining visual and tactile orientation information in the surround results in a larger tilt illusion relative to the illusion achieved with the visual-only surround. These results demonstrate that the visual tilt illusion is subject to multisensory influences and imply that non-visual signals access the neural circuits whose computations underlie the contextual modulation of vision.
|
43
|
Behavioral Strategy Determines Frontal or Posterior Location of Short-Term Memory in Neocortex. Neuron 2018; 99:814-828.e7. [PMID: 30100254 DOI: 10.1016/j.neuron.2018.07.029] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 05/29/2018] [Accepted: 07/18/2018] [Indexed: 01/11/2023]
Abstract
The location of short-term memory in mammalian neocortex remains elusive. Here we show that distinct neocortical areas maintain short-term memory depending on behavioral strategy. Using wide-field and single-cell calcium imaging, we measured layer 2/3 neuronal activity in mice performing a whisker-based texture discrimination task with delayed response. Mice either deployed an active strategy, engaging their body toward the approaching texture, or passively awaited the touch. Independent of strategy, whisker-related posterior areas encoded choice early after touch. During the delay, in contrast, persistent cortical activity was located medio-frontally in active trials but in a lateral posterior area in passive trials. Perturbing these areas impaired performance for the associated strategy and also provoked strategy switches. Frontally maintained information related to future action, whereas activity in the posterior cortex reflected past stimulus identity. Thus, depending on behavioral strategy, cortical activity is routed differentially to hold information either frontally or posteriorly before converging to similar action.
|
44
|
Visual and Motor Recovery After "Cognitive Therapeutic Exercises" in Cortical Blindness: A Case Study. J Neurol Phys Ther 2018. [PMID: 28628550 DOI: 10.1097/npt.0000000000000189] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
BACKGROUND AND PURPOSE Spontaneous visual recovery is rare after cortical blindness. While visual rehabilitation may improve performance, no visual therapy has been widely adopted, as clinical outcomes are variable and rarely translate into improvements in activities of daily living (ADLs). We explored the potential value of a novel rehabilitation approach, "cognitive therapeutic exercises," for cortical blindness. CASE DESCRIPTION The subject of this case study was a 48-year-old woman with cortical blindness and tetraplegia after cardiac arrest. Prior to the intervention, she was dependent in ADLs and poorly distinguished shapes and colors after 19 months of standard visual and motor rehabilitation. Computed tomographic images soon after symptom onset demonstrated acute infarcts in both occipital cortices. INTERVENTION The subject underwent 8 months of intensive rehabilitation with "cognitive therapeutic exercises" consisting of discrimination exercises correlating sensory and visual information. OUTCOMES Visual fields increased; object recognition improved; it became possible to watch television; voluntary arm movements improved in accuracy and smoothness; walking improved; and ADL independence and self-reliance increased. Subtraction of neuroimaging acquired before and after rehabilitation showed that focal glucose metabolism increased bilaterally in the occipital poles. DISCUSSION This study demonstrates the feasibility of "cognitive therapeutic exercises" in an individual with cortical blindness, who experienced impressive visual and sensorimotor recovery, with marked ADL improvement, more than 2 years after ischemic cortical damage. Video Abstract available for additional insights from the authors (see Video, Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A173).
|
45
|
Osteopathic clinical reasoning: An ethnographic study of perceptual diagnostic judgments, and metacognition. INT J OSTEOPATH MED 2018. [DOI: 10.1016/j.ijosm.2018.03.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
46
|
Thresholding functional connectomes by means of mixture modeling. Neuroimage 2018; 171:402-414. [PMID: 29309896 PMCID: PMC5981009 DOI: 10.1016/j.neuroimage.2018.01.003] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2017] [Revised: 12/30/2017] [Accepted: 01/02/2018] [Indexed: 12/19/2022] Open
Abstract
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations, however, remains an open research problem. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values, associated with weak or unreliable edges in the connectome, and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false-discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance relative to alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project.
The sparse connectomes obtained from mixture modeling are further discussed in light of previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that, using our method, we can extract similar information at the group level as can be achieved with permutation testing, even though the two methods are not equivalent. With both methods, we obtain functional decoupling between the two hemispheres in higher-order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject.
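The core idea, modeling edge weights as a mixture of an empirical-null component and a signal component and keeping only edges that the signal component claims with high posterior probability, can be sketched as follows. This is an illustrative simplification with invented toy data, not the paper's exact pseudo-FDR procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def threshold_connectome(partial_corr, signal_posterior=0.9, seed=0):
    """Sparsify a partial-correlation matrix with a 2-component Gaussian
    mixture: one component models the empirical null of weak/unreliable
    edges, the other the sparse set of reliable connections. Edges are
    kept when their posterior under the signal component exceeds a cutoff
    (a simplification of the pseudo-FDR strategy described above)."""
    iu = np.triu_indices_from(partial_corr, k=1)
    z = np.arctanh(partial_corr[iu]).reshape(-1, 1)   # Fisher z-transform

    gmm = GaussianMixture(n_components=2, random_state=seed).fit(z)
    signal = int(np.argmax(gmm.means_.ravel()))        # higher mean = signal
    posterior = gmm.predict_proba(z)[:, signal]

    keep = np.zeros_like(partial_corr, dtype=bool)
    keep[iu] = posterior >= signal_posterior
    keep |= keep.T                                     # keep the matrix symmetric
    return np.where(keep, partial_corr, 0.0)

# Toy connectome: 20 nodes, mostly null edges plus four planted strong ones.
rng = np.random.default_rng(0)
n = 20
pc = np.tanh(rng.normal(0.0, 0.05, (n, n)))
pc = (pc + pc.T) / 2
np.fill_diagonal(pc, 0.0)
for i, j, v in [(0, 1, 0.60), (2, 3, 0.55), (4, 5, 0.65), (6, 7, 0.50)]:
    pc[i, j] = pc[j, i] = v
sparse = threshold_connectome(pc)
```

The planted edges survive thresholding while the noise edges are zeroed out, yielding a subject-specific sparse connectome without any cohort-level inference.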
|
47
|
Semantic representation in the white matter pathway. PLoS Biol 2018; 16:e2003993. [PMID: 29624578 PMCID: PMC5906027 DOI: 10.1371/journal.pbio.2003993] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Revised: 04/18/2018] [Accepted: 02/05/2018] [Indexed: 11/29/2022] Open
Abstract
Object conceptual processing has been localized to distributed cortical regions that represent specific attributes. A challenging question is how object semantic space is formed. We tested a novel framework of representing semantic space in the pattern of white matter (WM) connections by extending representational similarity analysis (RSA) to structural lesion patterns and behavioral data in 80 brain-damaged patients. For each WM connection, a neural representational dissimilarity matrix (RDM) was computed by first building machine-learning models with the voxel-wise WM lesion patterns as features to predict naming performance for a particular item, and then computing the correlation between the predicted naming score and the actual naming score of another item in the testing patients. This correlation was used to build the neural RDM, based on the assumption that if the connection pattern contains certain aspects of information shared by the naming processes of these two items, models trained with one item should also predict naming accuracy of the other. Correlating the neural RDM with various cognitive RDMs revealed that neural patterns in several WM connections linking left occipital/middle temporal regions with anterior temporal regions were associated with the object semantic space. Such associations were not attributable to modality-specific attributes (shape, manipulation, color, and motion), to peripheral picture-naming processes (picture visual similarity, phonological similarity), to broad semantic categories, or to the properties of the cortical regions that they connected, which tended to represent multiple modality-specific attributes. That is, the semantic space could be represented through WM connection patterns across cortical regions representing modality-specific attributes.
One of the most challenging questions in cognitive neuroscience is how semantic knowledge, for example, that “scissors” and “knives” are related in meaning, can emerge from primary sensory dimensions such as visual form. It is often assumed that in the human brain, semantics are stored in regions of the cortex where distinct types of modality-specific information converge and bind together. We tested an alternative hypothesis, “representation by connection,” in which higher-order semantic information could be coded in the connection patterns between cortical regions. Combining behavioral and brain-imaging data from 80 patients with brain lesions, we applied machine learning to construct mapping models between the lesion patterns on axonal tracts (white matter) and item-specific object-naming performance. We found that specific white matter lesions produced deficits in object naming associated with the object’s semantic space, but not with its primary sensory dimensions. The naming performance of semantically related objects was better predicted by the same white matter lesion-pattern models. That is, higher-order semantic space could be coded in patterns of brain connections linking cortical areas that do not themselves necessarily contain such information.
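The comparison step at the heart of such an RSA, correlating the off-diagonal entries of a neural RDM with those of a cognitive (e.g., semantic) RDM, can be sketched on toy data. The matrices below are invented for illustration, not derived from the patient data:

```python
import numpy as np
from scipy.stats import spearmanr

def compare_rdms(neural_rdm, cognitive_rdm):
    """Core RSA step: Spearman correlation between the off-diagonal
    (upper-triangle) entries of two representational dissimilarity matrices."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    rho, p = spearmanr(neural_rdm[iu], cognitive_rdm[iu])
    return rho, p

# Toy data: 8 items whose hypothetical semantic distances are partly
# mirrored, plus noise, in a hypothetical "neural" RDM.
rng = np.random.default_rng(1)
n_items = 8
semantic = rng.random((n_items, n_items))
semantic = (semantic + semantic.T) / 2
np.fill_diagonal(semantic, 0.0)
neural = semantic + rng.normal(0.0, 0.05, semantic.shape)
neural = (neural + neural.T) / 2
np.fill_diagonal(neural, 0.0)

rho, p = compare_rdms(neural, semantic)
```

A high rank correlation indicates that the dissimilarity structure of the cognitive model is recoverable from the neural (here, connection-derived) patterns.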
|
48
|
Neuronal Assemblies Evidence Distributed Interactions within a Tactile Discrimination Task in Rats. Front Neural Circuits 2018; 11:114. [PMID: 29375324 PMCID: PMC5768614 DOI: 10.3389/fncir.2017.00114] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2017] [Accepted: 12/26/2017] [Indexed: 11/30/2022] Open
Abstract
Accumulating evidence suggests that neural interactions are distributed and relate to animal behavior, but many open questions remain. The neural assembly hypothesis, formulated by Hebb, states that synchronously active single neurons may transiently organize into functional neural circuits, neuronal assemblies (NAs), which would constitute the fundamental unit of information processing in the brain. However, the formation, vanishing, and temporal evolution of NAs are not fully understood. In particular, characterizing NAs in multiple brain regions over the course of behavioral tasks is relevant to assessing the highly distributed nature of brain processing. In the context of NA characterization, active tactile discrimination tasks with rats are elucidative because they engage several cortical areas in the processing of information that is otherwise masked in passive or anesthetized scenarios. In this work, we investigate the dynamic formation of NAs within and among four different cortical regions in long-range fronto-parieto-occipital networks (primary somatosensory, primary visual, prefrontal, and posterior parietal cortices), simultaneously recorded from seven rats engaged in an active tactile discrimination task. Our results first confirm that task-related neuronal firing-rate dynamics in all four regions are significantly modulated. Notably, a support vector machine decoder reveals that neural populations contain more information about the tactile stimulus than the majority of single neurons alone. Then, over the course of the task, we identify the emergence and vanishing of NAs whose participating neurons are shown to contain more information about animal behavior than randomly chosen neurons. Taken together, our results further support the role of multiple, distributed neurons as the functional unit of information processing in the brain (the NA hypothesis) and their link to active animal behavior.
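The population-versus-single-neuron decoding comparison can be sketched with a linear SVM on synthetic firing rates; the data generation below is an invented stand-in for the recorded spike counts:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons = 200, 30
stimulus = rng.integers(0, 2, n_trials)            # two texture classes

# Synthetic firing rates: each neuron carries only a weak stimulus signal.
tuning = rng.normal(0.0, 0.3, n_neurons)
rates = rng.normal(5.0, 1.0, (n_trials, n_neurons)) + np.outer(stimulus, tuning)

clf = SVC(kernel="linear")

# Cross-validated decoding from the full population vs. each neuron alone.
population_acc = cross_val_score(clf, rates, stimulus, cv=5).mean()
single_accs = [cross_val_score(clf, rates[:, [i]], stimulus, cv=5).mean()
               for i in range(n_neurons)]
```

Pooling weakly tuned neurons yields decoding accuracy well above the typical single neuron, the pattern the authors report for their recorded populations.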
|
49
|
Abstract
An exciting possibility for compensating for loss of sensory function is to augment deficient senses by conveying missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training for the use of such devices, while highlighting potential pitfalls in their design. One example of a pitfall is how conveying extra information about the environment risks sensory overload. Related to this, the limits of attentional capacity make it important to focus on key information and avoid redundancies. Also, differences in processing characteristics and bandwidth between sensory systems severely constrain the information that can be conveyed. Furthermore, perception is a continuous process and does not involve a snapshot of the environment. Design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment and the most effective ways of conveying information should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability, and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, where proximal sensory stimulation is attributed to a distinct exterior object.
|
50
|
Evaluating Integration Strategies for Visuo-Haptic Object Recognition. Cognit Comput 2017; 10:408-425. [PMID: 29881470 PMCID: PMC5971043 DOI: 10.1007/s12559-017-9536-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2017] [Accepted: 12/05/2017] [Indexed: 11/24/2022]
Abstract
In computational systems for visuo-haptic object recognition, vision and haptics are often modeled as separate processes. But this is far from what really happens in the human brain, where cross- as well as multimodal interactions take place between the two sensory modalities. Generally, three main principles can be identified as underlying the processing of visual and haptic object-related stimuli in the brain: (1) hierarchical processing, (2) the divergence of processing into substreams for object shape and material perception, and (3) the experience-driven self-organization of the integratory neural circuits. The question arises whether an object recognition system can benefit in terms of performance from adopting these brain-inspired processing principles for the integration of visual and haptic inputs. To address this, we compare an integration strategy that incorporates all three principles to the two integration strategies commonly used in the literature. We collected data with a NAO robot enhanced with inexpensive contact microphones as tactile sensors. The results of our experiments involving everyday objects indicate that (1) the contact microphones are a good alternative for capturing tactile information and that (2) organizing the processing of visual and haptic inputs hierarchically and in two pre-processing streams improves performance. Nevertheless, further research is needed to effectively quantify the role of each identified principle by itself as well as in combination with others.
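The two integration strategies commonly used in the literature that such a system is compared against, early (feature-level) fusion and late (decision-level) fusion, can be sketched on invented toy features standing in for the robot's visual and tactile inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_val_predict

rng = np.random.default_rng(3)
n, d_vis, d_hap = 300, 8, 6
labels = rng.integers(0, 3, n)                     # three object classes

# Synthetic features: each modality carries partial class information.
vis = rng.normal(0, 1, (n, d_vis)) + labels[:, None] * rng.normal(0.4, 0.1, d_vis)
hap = rng.normal(0, 1, (n, d_hap)) + labels[:, None] * rng.normal(0.4, 0.1, d_hap)

# Early fusion: concatenate modality features, train a single classifier.
early_acc = cross_val_score(LogisticRegression(max_iter=1000),
                            np.hstack([vis, hap]), labels, cv=5).mean()

# Late fusion: per-modality classifiers, average the predicted probabilities.
p_vis = cross_val_predict(LogisticRegression(max_iter=1000), vis, labels,
                          cv=5, method="predict_proba")
p_hap = cross_val_predict(LogisticRegression(max_iter=1000), hap, labels,
                          cv=5, method="predict_proba")
late_acc = (((p_vis + p_hap) / 2).argmax(axis=1) == labels).mean()
```

Early fusion lets the classifier exploit cross-modal feature interactions, while late fusion keeps the modality pipelines independent; the brain-inspired strategy in the paper instead organizes processing hierarchically with shape and material substreams.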
|