1
Kim M, Schachner A. Sounds of Hidden Agents: The Development of Causal Reasoning About Musical Sounds. Dev Sci 2025; 28:e70021. PMID: 40313093; PMCID: PMC12046371; DOI: 10.1111/desc.70021.
Abstract
Listening to music activates representations of movement and social agents. Why? We test whether causal reasoning plays a role, and find that from childhood, people can intuitively reason about how musical sounds were generated, inferring the events and agents that caused the sounds. In Experiment 1 (N = 120, pre-registered), 6-year-old children and adults inferred the presence of an unobserved animate agent from hearing musical sounds, by integrating information from the sounds' timing with knowledge of the visual context. Thus, children inferred that an agent was present when the sounds would require self-propelled movement to produce, given the current visual context (e.g., unevenly-timed notes, from evenly-spaced xylophone bars). Consistent with Bayesian causal inference, this reasoning was flexible, allowing people to make inferences not only about unobserved agents, but also the structure of the visual environment in which sounds were produced (in Experiment 2, N = 114). Across experiments, we found evidence of developmental change: Younger children ages 4-5 years failed to integrate auditory and visual information, focusing solely on auditory features (Experiment 1) and failing to connect sounds to visual contexts that produced them (Experiment 2). Our findings support a developmental account in which before age 6, children's reasoning about the causes of musical sounds is limited by failure to integrate information from multiple modalities when engaging in causal reasoning. By age 6, children and adults integrate auditory information with other knowledge to reason about how musical sounds were generated, and thereby link musical sounds with the agents, contexts, and events that caused them.
Affiliation(s)
- Minju Kim
- Department of Psychology, University of California, San Diego, California, USA
- Teaching and Learning Commons, University of California, San Diego, California, USA
- Adena Schachner
- Department of Psychology, University of California, San Diego, California, USA
2
Valenza A, Blount H, Ward J, Merrick C, Wootten R, Dearden J, Wildgoose C, Bianco A, Buoite-Stella A, Filingeri VL, Worsley PR, Filingeri D. Skin wetness perception across body sites in children and adolescents aged 7-16 years old. Exp Physiol 2025. PMID: 40448663; DOI: 10.1113/ep092691.
Abstract
Human skin wetness perception relies on the multisensory integration of thermal and mechanical cues during contact with moisture. Yet, it is unknown whether children and adolescents perceive skin wetness similarly to younger and older adults. We investigated skin wetness perceptions across the forehead, neck, forearm, and foot dorsum in 12 children/adolescents (4F/8M; 12 ± 3 years), 41 younger (21F/20M; 25 ± 3 years), and 21 older adults (11F/10M; 56 ± 6 years), during two established quantitative sensory tests. Our results indicated that, given the same moisture content (0.8 mL of water), very cold-wet stimuli applied to the forearm were perceived by all groups as wetter than neutral-wet (mean difference: 35.5 mm on a 100-mm visual analogue scale for wetness [95% CI: 22.3, 38.7]; P < 0.0001; ∼35% difference) and very hot-wet stimuli (mean difference: 22.7 mm [95% CI: 14.5, 40.9]; P < 0.0001; ∼23% difference). Children/adolescents also reported greater wetness perceptions than older adults during cold-wet stimulation of the forehead, neck and foot dorsum (mean difference: 20.6 mm [95% CI: 1.5, 39.7]; P = 0.031; ∼21% difference). In all age groups, the foot dorsum presented higher cold-wet sensitivity (mean difference: 11.1 mm [95% CI: 2.2, 20.0]; P = 0.010; ∼11% difference) and lower warm-wet sensitivity than the neck (mean difference: 12.9 mm [95% CI: 2.8, 23.0]; P = 0.008; ∼13% difference). We conclude that wetness perceptions in children/adolescents (age range: 7-16 years) are similar to those of adults in that both present (1) a characteristic U-shaped relationship between stimulus temperature and perceived wetness magnitude and (2) similar body regional patterns. These findings provide novel evidence on age-dependent variations in wetness perception which could inform user-centred innovation in thermal protection and garment design.
Affiliation(s)
- Alessandro Valenza
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Sport and Exercise Sciences Research Unit, SPPEFF Department, University of Palermo, Palermo, Italy
- Hannah Blount
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Jade Ward
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Charlotte Merrick
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Riley Wootten
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Jasmin Dearden
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Charlotte Wildgoose
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Antonino Bianco
- Sport and Exercise Sciences Research Unit, SPPEFF Department, University of Palermo, Palermo, Italy
- Alex Buoite-Stella
- Clinical Unit of Neurology, Department of Medicine, Surgery and Health Sciences, Trieste University Hospital-ASUGI University of Trieste, Trieste, Italy
- Victoria L Filingeri
- Psychological and Behavioural Sciences, School of Psychology, University of Derby, Derby, UK
- Peter R Worsley
- Pressurelab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
- Davide Filingeri
- ThermosenseLab, Skin Sensing Research Group, School of Health Sciences, The University of Southampton, Southampton, UK
3
Scheller M, Proulx MJ, de Haan M, Dahlmann-Noor A, Petrini K. Visual experience affects neural correlates of audio-haptic integration: A case study of non-sighted individuals. Prog Brain Res 2025; 292:25-70. PMID: 40409923; DOI: 10.1016/bs.pbr.2025.04.002.
Abstract
The ability to reduce sensory uncertainty by integrating information across different senses develops late in humans and depends on cross-modal sensory experience during childhood and adolescence. While the dependence of audio-haptic integration on vision suggests cross-modal neural reorganization, evidence for such changes is lacking. Furthermore, little is known about the neural processes underlying audio-haptic integration even in sighted adults. Here, we examined electrophysiological correlates of audio-haptic integration in sighted adults (n = 29), non-sighted adults (n = 7), and sighted adolescents (n = 12) using a data-driven electrical neuroimaging approach. In sighted adults, optimal integration performance was predicted by topographical and super-additive strength modulations around 205-285 ms. Data from four individuals who went blind before the age of 8-9 years suggest that they achieved optimal integration via different, sub-additive mechanisms at earlier processing stages. Sighted adolescents showed no robust multisensory modulations. Late-blind adults, who did not show behavioral benefits of integration, demonstrated modulations at early latencies. Our findings suggest a critical period for the development of optimal audio-haptic integration, dependent on visual experience, around late childhood and early adolescence.
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom; Department of Psychology, Durham University, Durham, United Kingdom.
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom; The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, United Kingdom; Bath Institute for the Augmented Human (IAH), Bath, United Kingdom
- Michelle de Haan
- Developmental Neurosciences Programme, University College London, London, United Kingdom
- Annegret Dahlmann-Noor
- NIHR Moorfields Biomedical Research Centre, London, United Kingdom; Paediatric Service, Moorfields Eye Hospital, London, United Kingdom
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom; The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, United Kingdom; Bath Institute for the Augmented Human (IAH), Bath, United Kingdom
4
Cooney SM, Holmes CA, Cappagli G, Cocchi E, Gori M, Newell FN. Susceptibility to spatial illusions does not depend on visual experience: Evidence from sighted and blind children. Q J Exp Psychol (Hove) 2025:17470218251336082. PMID: 40205750; DOI: 10.1177/17470218251336082.
Abstract
Visuospatial illusions may be a by-product of learned regularities in the environment or they may reflect the recruitment of sensory mechanisms that, in some contexts, provide an erroneous spatial estimate. Young children experience visual illusions, and blind adults are susceptible using touch alone, suggesting that the perceptual inferences influencing illusions are amodal and rapidly acquired. However, other evidence, such as visual illusions in the newly sighted, points to the involvement of innate mechanisms. To help tease apart cognitive from sensory influences, we investigated susceptibility to the Ebbinghaus, Müller-Lyer and Vertical-Horizontal illusions in children aged 6-14 years following visual-only, haptic-only and bimodal exploration. Consistent with previous findings, children of all ages were susceptible to all three visual illusions. In addition, illusions of extent but not of size were experienced using haptics alone. We then tested 17 congenitally blind children to investigate whether illusions were mediated by vision. Similar to their sighted counterparts, blind children were also susceptible to illusions following haptic exploration suggesting that early visual experience is not necessary for spatial illusions to be perceived. Reduced susceptibility in older children to some illusions further implies that explicit or formal knowledge of spatial relations is unlikely to mediate these experiences. Instead, the results are consistent with previous evidence for cross-modal interactions in 'visual' brain regions and point to the possibility that illusions may be driven by innate developmental processes that are not entirely dependent on, although are refined by, visual experience.
Affiliation(s)
- Sarah M Cooney
- Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland
- School of Psychology, University College Dublin, Dublin, Ireland
- Corinne A Holmes
- Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland
- Giulia Cappagli
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Elena Cocchi
- Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, Genova, Italy
- Monica Gori
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Fiona N Newell
- Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
5
Lam YX, Loh SJJ, Chan JYK, Lee NKL, Chong SL, Tan RMR, Bin Zainuddin MA, Mahadev A, Wong KPL. Glass injuries seen in a paediatric tertiary hospital in Singapore: An epidemiology study. Injury 2025; 56:112225. PMID: 40037263; DOI: 10.1016/j.injury.2025.112225.
Abstract
Lacerations rank as the most common paediatric injury requiring physician evaluation. Glass is a frequent cause of such lacerations; however, there is currently little information on these injuries. Hence, this paper aims to describe the burden and characteristics of such injuries in Singapore. This study is a retrospective review of glass-related trauma presenting to the Emergency Department of KKH, a paediatric hospital, between 1st January 2017 and 4th July 2023. Data on patient and injury characteristics, as well as treatment plans, were collected. 680 patients up to 18 years old (average age 6.93 years) were included in the study; 420 (62%) were male. The number of glass-related injuries was stable at about 100 per year from 2017 to 2023. 649 (95%) cases were unintentional, and 528 (78%) injuries occurred indoors. 159 (23%) children had adult supervision at the time of injury. A majority of injuries, 458 (67%), occurred on weekdays. Primary blunt injuries were the most common at 414 (61%), followed by 230 (34%) penetrating injuries. 317 (37%) injuries occurred at the lower limb, 305 (36%) at the upper limb, and 105 (12%) at the face. 596 (87.6%) patients had "None to mild" injuries, 31 (4.6%) had "Moderate" injuries, and 53 (7.8%) had "Severe" injuries. Glass doors led to 315 (46%) cases, with glass shards and glass panels causing 85 (12.5%) and 84 (12.5%) cases respectively. 555 (82%) patients received definitive treatment in the Emergency Department and 74 (11%) required surgery. The average duration of hospitalization across all patients was 0.36 days. 430 patients averaged 3.66 weeks of follow-up, while 247 were discharged immediately. 85 (13%) patients required inpatient care. Only 1 patient required fluid resuscitation in the Emergency Department. Most glass injuries are unintentional, caused by glass doors, occur indoors and are, fortunately, mild.
Affiliation(s)
- Yun Xiu Lam
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
- Spencer Jia Jie Loh
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
6
Casado-Palacios M, Tonelli A, Campus C, Gori M. Cross-Modal Interactions and Movement-Related Tactile Gating: The Role of Vision. Brain Sci 2025; 15:288. PMID: 40149809; PMCID: PMC11939845; DOI: 10.3390/brainsci15030288.
Abstract
BACKGROUND: When engaging with the environment, multisensory cues interact and are integrated to create a coherent representation of the world around us, a process that has been suggested to be affected by the lack of visual feedback in blind individuals. In addition, the presence of voluntary movement can be responsible for suppressing somatosensory information processed by the cortex, which might lead to worse encoding of tactile information. OBJECTIVES: In this work, we aim to explore how cross-modal interaction can be affected by active movements and the role of vision in this process. METHODS: To this end, we measured the precision of 18 blind individuals and 18 age-matched sighted controls in a velocity discrimination task. The participants were instructed to identify the faster of two sequentially presented stimuli in both passive and active touch conditions. The sensory stimulation could be either just tactile or audio-tactile, where a non-informative sound co-occurred with the tactile stimulation. The measure of precision was obtained by computing the just noticeable difference (JND) of each participant. RESULTS: The results show worse precision with the audio-tactile sensory stimulation in the active condition for the sighted group (p = 0.046) but not for the blind one (p = 0.513). For blind participants, only the movement itself had an effect. CONCLUSIONS: For sighted individuals, the presence of noise from active touch made them vulnerable to auditory interference. However, the blind group exhibited less sensory interaction, experiencing only the detrimental effect of movement. Our work should be considered when developing next-generation haptic devices.
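As a rough illustration of the JND-based precision measure described in this abstract (not code from the paper; the data, variable names, and the cumulative-Gaussian convention are assumptions), one could fit a psychometric function to the proportion of "comparison judged faster" responses and take its spread as the JND:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: velocity difference (comparison minus standard, cm/s)
# and the proportion of trials on which the comparison was judged faster.
delta_v = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p_faster = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.97])

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, delta_v, p_faster, p0=[0.0, 1.0])

# Under this convention the JND is the fitted sigma: the velocity difference that
# moves responses from 50% to ~84% "faster" judgements; a larger JND means lower precision.
print(f"PSE = {mu:.2f} cm/s, JND = {sigma:.2f} cm/s")
```

Comparing such per-participant JNDs across the tactile-only and audio-tactile conditions is the kind of contrast the study reports.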
Affiliation(s)
- Maria Casado-Palacios
- DIBRIS—Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
- UVIP—Unit for Visually Impaired People, Italian Institute of Technology, 16152 Genoa, Italy
- Alessia Tonelli
- UVIP—Unit for Visually Impaired People, Italian Institute of Technology, 16152 Genoa, Italy
- Claudio Campus
- UVIP—Unit for Visually Impaired People, Italian Institute of Technology, 16152 Genoa, Italy
- Monica Gori
- UVIP—Unit for Visually Impaired People, Italian Institute of Technology, 16152 Genoa, Italy
7
Kong Y, Yuan X, Hu Y, Li B, Li D, Guo J, Sun M, Song Y. Development of the relationship between visual selective attention and auditory change detection. Neuroimage 2025; 306:121020. PMID: 39800173; DOI: 10.1016/j.neuroimage.2025.121020.
Abstract
Understanding the developmental trajectories of the auditory and visual systems is crucial to elucidate cognitive maturation and its associated relationships, which are essential for effectively navigating dynamic environments. One of our recent studies showed a positive correlation between the event-related potential (ERP) amplitudes associated with visual selective attention (posterior contralateral N2) and auditory change detection (mismatch negativity) in adults, suggesting an intimate relationship and potential shared mechanism between visual selective attention and auditory change detection. However, the evolution of these processes and their relationship over time remains unclear. In this study, we recorded electroencephalography signals from 118 participants (42 adults and 76 typically developing children) during separate visual localization and auditory-embedded fixation tasks. Further, we employed both ERP analysis and multivariate pattern machine learning to investigate developmental patterns. ERP amplitude and decoding accuracy provided convergent evidence for a linear developmental trajectory for visual selective attention and an inverted U-shaped trajectory for auditory change detection from childhood to adulthood. Importantly, our findings confirmed the established N2pc-MMN association in adults using a larger sample size, and further identified a positive correlation between decoding accuracy for visual target location and decoding accuracy for auditory stimulus type specifically in adults. However, both visual-auditory correlation effects were absent in children. Our study provides neurophysiological insights into the distinct developmental trajectories of visual selective attention and auditory change detection. It highlights that the close relationship between individual differences in the two processes emerges alongside their respective maturation and does not become evident until adulthood.
Affiliation(s)
- Yuanjun Kong
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China
- Xuye Yuan
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China
- Yiqing Hu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China
- Bingkun Li
- National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, PR China
- Dongwei Li
- Department of Applied Psychology, Faculty of Arts and Sciences, Beijing Normal University at Zhuhai, Zhuhai 519087, PR China; Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, Beijing 100875, PR China
- Jialiang Guo
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China
- Meirong Sun
- School of Psychology, Beijing Sport University, Beijing 100084, PR China
- Yan Song
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, PR China.
8
Reiss LAJ, Johnson AJ, Eddolls MS, Hartling CL, Fowler JR, Stark GN, Glickman B, Sanders H, Oh Y. Binaural Fusion Sharpens on a Scale of Octaves During Pre-adolescence in Children with Normal Hearing, Hearing Aids, and Bimodal Cochlear Implants, but not Bilateral Cochlear Implants. J Assoc Res Otolaryngol 2025; 26:93-109. PMID: 39915430; PMCID: PMC11861472; DOI: 10.1007/s10162-025-00975-4.
Abstract
PURPOSE: The breadth of binaural pitch fusion, the integration of sounds differing in frequency across the two ears, can limit the ability to segregate and understand speech in background noise. Binaural pitch fusion is one type of central auditory processing that may still be developing in the pre-adolescent age range. In addition, children with hearing loss potentially have different trajectories of development of central auditory processing compared to their normal-hearing (NH) peers, due to disruption of auditory input and/or abnormal stimulation from hearing devices. The goal of this study was to measure and compare binaural pitch fusion changes during development in children with NH versus hearing loss and different hearing device combinations. Interaural pitch discrimination abilities were also measured to control for pitch discrimination as a potential limiting factor for fusion that may also change during development. METHODS: Baseline measurements of binaural pitch fusion and interaural pitch discrimination were conducted in a total of 62 (22 female) children with NH (n = 25), bilateral hearing aids (HA; n = 10), bimodal cochlear implants (CI; n = 9), and bilateral CIs (n = 18), with longitudinal follow-up for a subset of participants (18 NH, 9 HA, 8 bimodal CI, and 15 bilateral CI). Age at the start of testing ranged from 6 to 10 years old, with a goal of repeated measurements over 3-6 years. Binaural pitch fusion ranges were measured as the range of acoustic frequencies (electrodes) presented to one ear that was perceptually fused with a single reference frequency (electrode) presented simultaneously to the other ear. Similarly, interaural pitch discrimination was measured as the range of frequencies (electrodes) that could not be consistently ranked in pitch compared to a single reference frequency (electrode) under sequential presentation to opposite ears. RESULTS: Children with NH and HAs initially had broad binaural pitch fusion ranges compared to adults. With increasing age, the binaural fusion range narrowed by 1-3 octaves for children with NH, bilateral HAs, and bimodal CIs, but not for children with bilateral CIs. Interaural pitch discrimination showed no changes with age, though differences in discrimination ability were seen across groups. CONCLUSION: Binaural fusion sharpens significantly on the scale of octaves in the age range from 6 to 14 years. The lack of change in interaural pitch discrimination with increasing age rules out discrimination changes as an explanation for the binaural fusion range changes. The differences in the trajectory of binaural fusion changes across groups indicate the importance of hearing device combination for the development of binaural processing abilities in children with hearing loss, with implications for addressing challenges with speech perception in noise. Together, the results suggest that pruning of binaural connections is still occurring and likely guided by hearing experience during childhood development.
Affiliation(s)
- Lina A J Reiss
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA.
- Alicia J Johnson
- Biostatistics and Design Program, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Morgan S Eddolls
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Curtis L Hartling
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Jennifer R Fowler
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Gemaine N Stark
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Bess Glickman
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Holden Sanders
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
- Yonghee Oh
- Department of Otolaryngology, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, Portland, OR, 97239, USA
9
Szubielska M, Wojtasiński M, Pasternak M, Pasternak K, Augustynowicz P, Picard D. Investigating canonical size phenomenon in drawing from memory task in different perceptual conditions among children. Sci Rep 2025; 15:2512. PMID: 39833272; PMCID: PMC11747402; DOI: 10.1038/s41598-025-86923-x.
Abstract
The canonical size phenomenon refers to the mental representation of real-object size information: objects that are larger in the physical world are represented as larger in mental spatial representations. This study tested this phenomenon in a drawing-from-memory task among children aged 5, 7, and 9 years. The participants were asked to draw objects whose actual sizes varied at eight size rank levels. Drawings were made on regular paper sheets or special foils to produce embossed drawings. When drawing from memory, the participants were either sighted or blindfolded (to prevent visual feedback). We predicted that the drawn size of objects would increase with increasing size rank of objects. The findings supported the hypothesis concerning the canonical size effect among all age groups tested. This means that children aged 5 to 9 represent real-world size information about everyday objects and are sensitive to their size subtleties. Moreover, the drawn size increased with increasing size rank in both the sighted and blindfolded perceptual conditions (however, the slope of the functions that best explained the relation between size rank and drawn size varied between the perceptual conditions). This finding further supports the recent evidence of the spatial character of the canonical size phenomenon.
Affiliation(s)
- Magdalena Szubielska
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland.
- Marcin Wojtasiński
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland
- Monika Pasternak
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland
- Katarzyna Pasternak
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland
- Paweł Augustynowicz
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland
10
Ganea N, Addyman C, Yang J, Bremner A. Effects of multisensory stimulation on infants' learning of object pattern and trajectory. Child Dev 2024; 95:2133-2149. PMID: 39105480; DOI: 10.1111/cdev.14147.
Abstract
This study investigated whether infants better encode the features of a briefly occluded object if its movements are specified simultaneously by vision and audition than if they are not (data collected: 2017-2019). Experiment 1 showed that 10-month-old infants (N = 39, 22 females, White-English) notice changes in the visual pattern on the object irrespective of the stimulation received (spatiotemporally congruent audio-visual stimulation, incongruent stimulation, or visual-only; ηp² = .53). Experiment 2 (N = 72, 36 females) found similar results in 6-month-olds (Test Block 1, ηp² = .13), but not 4-month-olds. Experiment 3 replicated this finding with another group of 6-month-olds (N = 42, 21 females) and showed that congruent stimulation enables infants to detect changes in object trajectory (d = 0.56) in addition to object pattern (d = 1.15), whereas incongruent stimulation hinders performance.
Affiliation(s)
- Nataşa Ganea
- Department of Psychology, Goldsmiths, University of London, London, UK
- Caspar Addyman
- Department of Psychology, Goldsmiths, University of London, London, UK
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan
- Andrew Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK
11
Chow JK, Palmeri TJ, Gauthier I. Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev 2024; 31:2148-2159. PMID: 38381302; DOI: 10.3758/s13423-024-02471-x.
Abstract
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, which have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities; while there are mechanisms that may generalize across categories, tasks, and modalities, there are still other mechanisms that are distinct between modalities.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Thomas J Palmeri
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
12
Reiss LAJ, Goupell MJ. Binaural fusion: Complexities in definition and measurement. J Acoust Soc Am 2024; 156:2395-2408. PMID: 39392352; PMCID: PMC11470809; DOI: 10.1121/10.0030476.
Abstract
Despite the growing interest in studying binaural fusion, there is little consensus over its definition or how it is best measured. This review seeks to describe the complexities of binaural fusion, highlight measurement challenges, provide guidelines for rigorous perceptual measurements, and provide a working definition that encompasses this information. First, it is argued that binaural fusion may be multidimensional and might occur in one domain but not others, such as fusion in the spatial but not the spectral domain or vice versa. Second, binaural fusion may occur on a continuous scale rather than on a binary one. Third, binaural fusion responses are highly idiosyncratic, which could be a result of methodology, such as the specific experimental instructions, suggesting a need to explicitly report the instructions given. Fourth, it is possible that direct ("Did you hear one sound or two?") and indirect ("Where did the sound come from?" or "What was the pitch of the sound?") measurements of fusion will produce different results. In conclusion, explicit consideration of these attributes and reporting of methodology are needed for rigorous interpretation and comparison across studies and listener populations.
Affiliation(s)
- Lina A J Reiss
- Oregon Hearing Research Center, Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon 97239, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
13
Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024; 178:106180. PMID: 38815526; DOI: 10.1016/j.bandc.2024.106180.
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process underlying how the brain implements multisensory integration (MSI) remains poorly known. This cross-sectional study aims to characterize the developmental patterns of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task, including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated Event-Related Potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli to the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, QC, Canada.
14
Frumento S, Preatoni G, Chee L, Gemignani A, Ciotti F, Menicucci D, Raspopovic S. Unconscious multisensory integration: behavioral and neural evidence from subliminal stimuli. Front Psychol 2024; 15:1396946. PMID: 39091706; PMCID: PMC11291458; DOI: 10.3389/fpsyg.2024.1396946.
Abstract
Introduction: The prevailing theories of consciousness consider the integration of different sensory stimuli as a key component for this phenomenon to arise at the brain level. Although many theories and models have been proposed for multisensory integration between supraliminal stimuli (e.g., the optimal integration model), we do not know whether multisensory integration also occurs for subliminal stimuli and what psychophysical mechanisms it follows. Methods: To investigate this, subjects were exposed to visual (Virtual Reality) and/or haptic stimuli (Electro-Cutaneous Stimulation) above or below their perceptual threshold. They had to discriminate, in a two-alternative forced-choice task, the intensity of unimodal and/or bimodal stimuli. They were then asked to discriminate the sensory modality while their EEG responses were recorded. Results: We found evidence of multisensory integration in the supraliminal condition, following the classical optimal model. Importantly, even for subliminal trials, participants' performance in the bimodal condition was significantly more accurate when discriminating the intensity of the stimulation. Moreover, significant differences emerged between unimodal and bimodal activity templates in parieto-temporal areas known for their integrative role. Discussion: This converging evidence, although preliminary and in need of confirmation with further data, suggests that subliminal multimodal stimuli can be integrated, thus filling a meaningful gap in the debate about the relationship between consciousness and multisensory integration.
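For context, the "classical optimal model" invoked here is usually the maximum-likelihood (reliability-weighted) combination of the unimodal estimates; a standard textbook statement, not taken from this paper (V = visual, H = haptic, σ² = estimate variance), is:

```latex
\[
\hat{S}_{VH} = w_V\,\hat{S}_V + w_H\,\hat{S}_H,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},\quad w_H = 1 - w_V,
\qquad
\sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min\!\left(\sigma_V^2,\, \sigma_H^2\right).
\]
```

The predicted bimodal variance is never larger than the better unimodal variance, which is the precision gain typically tested against behavioural data.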
Affiliation(s)
- Sergio Frumento
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Greta Preatoni
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
- Lauren Chee
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
- Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Clinical Psychology Branch, Azienda Ospedaliero-Universitaria Pisana, Pisa, Italy
- Federico Ciotti
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
- Danilo Menicucci
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Stanisa Raspopovic
- Laboratory for Neuroengineering, Department of Health Sciences and Technology, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
15
Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. PMID: 38729280; DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as being crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review delves into the topic by investigating the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was independently performed by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy.
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
16
Purpura G, Petri S, Tancredi R, Tinelli F, Calderoni S. Haptic and visuo-haptic impairments for object recognition in children with autism spectrum disorder: focus on the sensory and multisensory processing dysfunctions. Exp Brain Res 2024; 242:1731-1744. PMID: 38819648; PMCID: PMC11208199; DOI: 10.1007/s00221-024-06855-2.
Abstract
Dysfunctions in sensory processing are widely described in individuals with autism spectrum disorder (ASD), although little is known about the developmental course of these difficulties and their impact on learning processes during the preschool and school years in children with ASD. Specifically, regarding the interplay between visual and haptic information in ASD during development, knowledge is scarce and controversial. In this study, we investigated unimodal (visual and haptic) and cross-modal (visuo-haptic) processing skills aimed at object recognition through a behavioural paradigm already used in children with typical development (TD), with cerebral palsy and with peripheral visual impairments. Thirty-five children with ASD (age range: 5-11 years) and thirty-five age-matched and gender-matched typically developing peers were recruited. The procedure required participants to perform an object-recognition task relying on only the visual modality (black-and-white photographs), only the haptic modality (manipulation of real objects) and visuo-haptic transfer of these two types of information. Results are consistent with the idea that visuo-haptic transfer may be significantly worse in ASD children than in TD peers, leading to significant impairment in multisensory interactions for object recognition facilitation. Furthermore, ASD children tended to show a specific deficit in haptic information processing, while a similar trend of maturation of the visual modality between the two groups was observed. This study adds to the current literature by suggesting that ASD differences in multisensory processes also extend to the visuo-haptic abilities necessary to identify and recognise objects of daily life.
Affiliation(s)
- G Purpura
- School of Medicine and Surgery, University of Milano Bicocca, Monza, Italy
- S Petri
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- R Tancredi
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- F Tinelli
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- S Calderoni
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy.
- Department of Clinical and Experimental Medicine, University of Pisa, Via Roma 55, Pisa, 56126, Italy.
17
Mamassian P. Multistage maturation optimizes vision. Science 2024; 384:848-849. PMID: 38781399; DOI: 10.1126/science.adp6594.
Abstract
Late development of color vision improves object recognition.
Affiliation(s)
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, CNRS, Paris, France
18
Cao S, Kelly J, Nyugen C, Chow HM, Leonardo B, Sabov A, Ciaramitaro VM. Prior visual experience increases children's use of effective haptic exploration strategies in audio-tactile sound-shape correspondences. J Exp Child Psychol 2024; 241:105856. PMID: 38306737; DOI: 10.1016/j.jecp.2023.105856.
Abstract
Sound-shape correspondence refers to the preferential mapping of information across the senses, such as associating a nonsense word like bouba with rounded abstract shapes and kiki with spiky abstract shapes. Here we focused on audio-tactile (AT) sound-shape correspondences between nonsense words and abstract shapes that are felt but not seen. Despite previous research indicating a role for visual experience in establishing AT associations, it remains unclear how visual experience facilitates AT correspondences. Here we investigated one hypothesis: seeing the abstract shapes improves haptic exploration by (a) increasing effective haptic strategies and/or (b) decreasing ineffective haptic strategies. We analyzed five haptic strategies in video-recordings of 6- to 8-year-old children obtained in a previous study. We found that the dominant strategy used to explore shapes differed based on visual experience. Effective strategies, which provide information about shape, were dominant in participants with prior visual experience, whereas ineffective strategies, which do not provide information about shape, were dominant in participants without prior visual experience. With prior visual experience, poking, an effective and efficient strategy, was dominant, whereas without prior visual experience, uncategorizable and ineffective strategies were dominant. These findings suggest that prior visual experience of abstract shapes in 6- to 8-year-olds can increase the effectiveness and efficiency of haptic exploration, potentially explaining why prior visual experience can increase the strength of AT sound-shape correspondences.
Affiliation(s)
- Shibo Cao
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Julia Kelly
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Cuong Nyugen
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Hiu Mei Chow
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA; Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Brianna Leonardo
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Aleksandra Sabov
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA.
19
Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024; 95:750-765. PMID: 37843038; DOI: 10.1111/cdev.14022.
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study in Mandarin-speaking 3- to 4-year-old, 5- to 6-year-old, and 7- to 8-year-old children, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable. For the perception of incongruent stimuli, a developmental shift was observed: 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses to incongruent stimuli than the older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yicheng Rong
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Gang Peng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
20
Holmes CA, Cooney SM, Dempsey P, Newell FN. Developmental changes in the visual, haptic, and bimodal perception of geometric angles. J Exp Child Psychol 2024; 241:105870. PMID: 38354447; DOI: 10.1016/j.jecp.2024.105870.
Abstract
Geometrical knowledge is typically taught to children through a combination of vision and repetitive drawing (i.e. haptics), yet our understanding of how different spatial senses contribute to geometric perception during childhood is poor. Studies of line orientation suggest a dominant role of vision affecting the calibration of haptics during development; however, the associated multisensory interactions underpinning angle perception are unknown. Here we examined visual, haptic, and bimodal perception of angles across three age groups of children: 6 to 8 years, 8 to 10 years, and 10 to 12 years, with age categories also representing their class (grade) in primary school. All participants first learned an angular shape, presented dynamically, in one of three sensory tracing conditions: visual only, haptic only, or bimodal exploration. At test, which was visual only, participants selected a target angle from four possible alternatives with distractor angle sizes varying relative to the target angle size. We found a clear improvement in accuracy of angle perception with development for all learning modalities. Angle perception in the youngest group was equally poor (but above chance) for all modalities; however, for the two older child groups, visual learning was better than haptics. Haptic perception did not improve to the level of vision with age (even in a comparison adult group), and we found no specific benefit for bimodal learning over visual learning in any age group, including adults. Our results support a developmental increment in both spatial accuracy and precision in all modalities, which was greater in vision than in haptics, and are consistent with previous accounts of cross-sensory calibration in the perception of geometric forms.
Affiliation(s)
- Corinne A Holmes
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
- Sarah M Cooney
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland; School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Paula Dempsey
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland; Department of Psychology, New York University Abu Dhabi, United Arab Emirates.
| |
|
21
|
Cantarella G, Mioni G, Bisiacchi PS. Young adults and multisensory time perception: Visual and auditory pathways in comparison. Atten Percept Psychophys 2024; 86:1386-1399. [PMID: 37674041 PMCID: PMC11093818 DOI: 10.3758/s13414-023-02773-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2023] [Indexed: 09/08/2023]
Abstract
The brain continuously encodes information about time, but how sensory channels interact to achieve a stable representation of such ubiquitous information remains to be determined. According to recent research, children show potential interference in multisensory conditions, leading to a trade-off between two senses (sight and audition) in time-perception tasks. This study aimed to examine how healthy young adults behave when performing a time-perception task. In Experiment 1, we tested the effects of temporary sensory deprivation on both visual and auditory senses in a group of young adults. In Experiment 2, we compared the temporal performances of young adults in the auditory modality with those of two samples of children (sighted and sighted but blindfolded) selected from a previous study. Statistically significant results emerged when comparing the two pathways: young adults overestimated and showed a higher sensitivity to time in the auditory modality compared to the visual modality. Restricting visual and auditory input did not affect their time sensitivity. Moreover, children were more accurate at estimating time than young adults after a transient visual deprivation. This implies that as we mature, sensory deprivation does not benefit time perception, and supports the hypothesis of a calibration process between the senses with age. However, more research is needed to determine how this calibration process affects the developmental trajectories of time perception.
Affiliation(s)
- Giovanni Cantarella
- Department of Psychology, University of Bologna, Viale Berti Pichat, 5, 40127, Bologna, Italy
| | - Giovanna Mioni
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
| | - Patrizia Silvia Bisiacchi
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy.
- Padova Neuroscience Center, Padova, Italy.
| |
|
22
|
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024; 56:2842-2858. [PMID: 37730934 PMCID: PMC11133123 DOI: 10.3758/s13428-023-02227-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/27/2023] [Indexed: 09/22/2023]
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains - the classic hallmark of cue combination - is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
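To make the comparator issue concrete, below is a minimal simulation sketch (not taken from the paper; the distributions, sample sizes, and variable names are illustrative assumptions). It simulates observers who never combine cues, yet comparing combined-condition precision against a single cue chosen at the group level still suggests a spurious precision gain, whereas comparing against each observer's own best single cue does not:

```python
import numpy as np

rng = np.random.default_rng(1)

def observer_sds(sigma_a, sigma_b, n_trials=200):
    """Response SDs for one simulated observer who does NOT combine cues:
    in the two-cue condition they simply rely on their more precise single cue."""
    single_a = rng.normal(0, sigma_a, n_trials)   # cue-A-only responses
    single_b = rng.normal(0, sigma_b, n_trials)   # cue-B-only responses
    two_cue = rng.normal(0, min(sigma_a, sigma_b), n_trials)  # best-single-cue strategy
    return single_a.std(ddof=1), single_b.std(ddof=1), two_cue.std(ddof=1)

sds = np.array([observer_sds(*rng.uniform(0.5, 2.0, 2)) for _ in range(40)])
sd_a, sd_b, sd_ab = sds[:, 0], sds[:, 1], sds[:, 2]

# Correct comparator: each observer's own most precise single cue.
gain_individual = np.minimum(sd_a, sd_b) - sd_ab

# Flawed comparator: the single cue that is more precise at the group level.
group_cue = sd_a if sd_a.mean() < sd_b.mean() else sd_b
gain_group = group_cue - sd_ab

print(f"mean 'gain' vs. individual best cue: {gain_individual.mean():+.3f}")  # ~0 (no combination)
print(f"mean 'gain' vs. group-level cue:     {gain_group.mean():+.3f}")       # spuriously positive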
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK.
| | - Marko Nardini
- Department of Psychology, Durham University, Durham, UK
| |
|
23
|
Sandini G, Sciutti A, Morasso P. Artificial cognition vs. artificial intelligence for next-generation autonomous robotic agents. Front Comput Neurosci 2024; 18:1349408. [PMID: 38585280 PMCID: PMC10995397 DOI: 10.3389/fncom.2024.1349408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Accepted: 02/20/2024] [Indexed: 04/09/2024] Open
Abstract
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and the fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a rather broad sense, encompassing LLMs and much else, without any unifying principle other than its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of which antedate the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
Affiliation(s)
| | | | - Pietro Morasso
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies (CONTACT) and Robotics, Brain and Cognitive Sciences (RBCS) Research Units, Genoa, Italy
| |
|
24
|
Senna I, Piller S, Martolini C, Cocchi E, Gori M, Ernst MO. Multisensory training improves the development of spatial cognition after sight restoration from congenital cataracts. iScience 2024; 27:109167. [PMID: 38414862 PMCID: PMC10897914 DOI: 10.1016/j.isci.2024.109167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 11/04/2023] [Accepted: 02/05/2024] [Indexed: 02/29/2024] Open
Abstract
Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).
Affiliation(s)
- Irene Senna
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
- Department of Psychology, Liverpool Hope University, Liverpool L16 9JD, UK
| | - Sophia Piller
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
| | - Chiara Martolini
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
| | - Elena Cocchi
- Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, 16145 Genova, Italy
| | - Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
| | - Marc O. Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
| |
|
25
|
Gori M, Sciutti A, Torazza D, Campus C, Bollini A. The effect of visuo-haptic exploration on the development of the geometric cross-sectioning ability. J Exp Child Psychol 2024; 238:105774. [PMID: 37703720 DOI: 10.1016/j.jecp.2023.105774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 08/23/2023] [Accepted: 08/23/2023] [Indexed: 09/15/2023]
Abstract
Cross-sectioning is a shape-understanding task in which participants must infer and interpret the spatial features of three-dimensional (3D) solids by depicting their internal two-dimensional (2D) arrangement. An increasing body of research provides evidence of the crucial role of sensorimotor experience in acquiring these complex geometrical concepts. Here, we focused on how cross-sectioning ability emerges in young children and on the influence of multisensory visuo-haptic experience on geometrical learning, through two experiments. In Experiment 1, we compared the 3D printed version of the Santa Barbara Solids Test (SBST) with its classical paper version; in Experiment 2, we contrasted the children's performance in the SBST before and after visual or visuo-haptic experience. In Experiment 1, we did not identify an advantage of visualizing 3D shapes over the classical 2D paper test. In contrast, in Experiment 2, we found that children who experienced a combination of visual and tactile information during the exploration phase improved their performance in the SBST compared with children who were limited to visual exploration. Our study demonstrates how practicing novel multisensory strategies improves children's understanding of complex geometrical concepts. This outcome highlights the importance of introducing multisensory experience in educational training and the need to develop new technologies that could improve learning abilities in children.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16152 Genoa, Italy
| | | | - Diego Torazza
- Robotics, Brain and Cognitive Sciences Unit, Istituto Italiano di Tecnologia, 16152 Genoa, Italy; Mechanical Workshop, Istituto Italiano di Tecnologia, 16152 Genoa, Italy
| | - Claudio Campus
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16152 Genoa, Italy
| | - Alice Bollini
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16152 Genoa, Italy.
| |
|
26
|
Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:139-152. [PMID: 38270858 DOI: 10.1007/978-981-99-7611-9_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing, due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, this only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback) the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower level mechanism, the latter likely reflects a higher level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in higher level multisensory cortical area VIP, but not in the relatively lower level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series) the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This too comprises a variety of mechanisms, including confirmation bias to prior choices, and lower level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, with improved synthesis of theories is needed to attain a more comprehensive understanding of multisensory brain function.
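As a purely illustrative toy example (not the model used in the cited chapter; the update rules, learning rates, and bias values are assumptions), the two calibration mechanisms can be sketched as different update rules acting on cue-specific biases:

```python
def unsupervised_step(bias_a, bias_b, rate=0.05):
    """Cues calibrate one another: each shifts to reduce their mutual discrepancy,
    with no external feedback involved."""
    discrepancy = bias_a - bias_b
    return bias_a - rate * discrepancy, bias_b + rate * discrepancy

def supervised_step(bias_a, bias_b, feedback_error, rate=0.1):
    """External feedback calibrates the combined percept: both cues shift together
    in the direction that reduces the externally signalled error."""
    return bias_a - rate * feedback_error, bias_b - rate * feedback_error

bias_a, bias_b = 5.0, -5.0   # e.g., degrees of heading bias in two hypothetical cues
for _ in range(50):
    bias_a, bias_b = unsupervised_step(bias_a, bias_b)
print(round(bias_a, 2), round(bias_b, 2))  # discrepancy shrinks; any shared bias would remain

bias_a, bias_b = supervised_step(bias_a, bias_b, feedback_error=2.0)  # shared shift toward feedback
```

The point of the sketch is only the qualitative distinction: the unsupervised rule reduces the between-cue discrepancy but cannot remove a bias common to both cues, whereas the supervised rule moves both cues together relative to external feedback.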
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel.
| |
|
27
|
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:59-76. [PMID: 38270853 DOI: 10.1007/978-981-99-7611-9_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
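For readers unfamiliar with the computation, below is a minimal sketch of Bayesian causal inference with model averaging for two spatial cues, following the standard Gaussian formulation used in this literature; the prior width, the prior probability of a common cause, and the example values are illustrative assumptions rather than parameters from the chapter:

```python
import numpy as np

def auditory_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p=10.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian causal inference
    (Gaussian likelihoods, zero-mean Gaussian prior over location)."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the two noisy signals under one common cause (C = 1) ...
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / denom1)
    like_c1 /= 2 * np.pi * np.sqrt(denom1)
    # ... and under two independent causes (C = 2).
    like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp)))
    like_c2 /= 2 * np.pi * np.sqrt((va + vp) * (vv + vp))

    post_common = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted estimates (including the prior) under each causal structure.
    s_common = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_separate = (x_a / va) / (1 / va + 1 / vp)

    return post_common * s_common + (1 - post_common) * s_separate  # model averaging

# Nearby signals are largely fused toward the reliable visual cue; far-apart ones are not.
print(auditory_estimate(x_a=8.0, x_v=6.0, sigma_a=4.0, sigma_v=1.0))
print(auditory_estimate(x_a=20.0, x_v=-5.0, sigma_a=4.0, sigma_v=1.0))
```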
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK.
| | - Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
|
28
|
Ross CF, Bernhard CB, Surette V, Hasted A, Wakeling I, Smith-Simpson S. The influence of food sensory properties on eating behaviours in children with Down syndrome. Food Res Int 2024; 175:113749. [PMID: 38128994 DOI: 10.1016/j.foodres.2023.113749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 11/15/2023] [Accepted: 11/22/2023] [Indexed: 12/23/2023]
Abstract
Developing new food products for children is challenging, particularly in vulnerable groups including children with Down syndrome (DS). Focusing on children with DS, the aim of this study was to examine the influence of parent liking on acceptance of food products by children with DS and to demonstrate the influence of food sensory properties on indicators of food acceptance, food rejection, and challenging eating behaviours. Children (ages 11-58 months) with DS (n = 111) participated in a home use test evaluating snack products with varying sensory properties as profiled by a trained sensory panel. Parents recorded their children's reactions to each food product; trained coders coded videos for eating behaviours. To understand the influence of each sensory modality on eating behaviour, ordered probit regression models were run. Results showed a significant correlation between parent liking and overall child disposition to the food (p < 0.05). From the regression analysis, the inclusion of all food sensory properties, including texture, flavour, taste, and product shape and size, improved the percentage of variance explained in child mealtime behaviours and overall disposition over the base model (containing no sensory modalities), with texture having the largest influence. Overstuffing the mouth, a challenging eating behaviour, was most influenced by product texture (children ≥ 30 months), and product texture and size (children < 30 months). In both age groups, coughing/choking/gagging was most influenced by food texture and was associated with a product that was grainy and angular (sharp corners). In both age groups, product acceptance was associated with a product that was dissolvable, crispy, and savoury, while rejection was associated with a dense, gummy and fruity product. These results suggest that a dissolvable, crispy texture and a cheesy or buttery flavour are the sensory properties important in a desirable flavoured commercial snack product for children with DS; however, overall disposition must be balanced against mouth overstuffing.
Affiliation(s)
- Carolyn F Ross
- School of Food Science, Washington State University, Pullman, WA, USA.
| | - C B Bernhard
- School of Food Science, Washington State University, Pullman, WA, USA
| | - Victoria Surette
- School of Food Science, Washington State University, Pullman, WA, USA
| | | | | | | |
|
29
|
Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023; 44:5972-5981. [PMID: 37811869 PMCID: PMC10619374 DOI: 10.1002/hbm.26489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 07/31/2023] [Accepted: 08/30/2023] [Indexed: 10/10/2023] Open
Abstract
To solve spatial tasks, the human brain recruits the visual cortices. Nonetheless, the representation of spatial information is not fixed but depends on the reference frames in which the spatial inputs are involved. The present study investigates how the kind of spatial representation influences the recruitment of visual areas during multisensory spatial tasks. Our study tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that the spatial tasks specifically modulated occipital event-related potentials (ERPs) after stimulus onset. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection task, and a more robust later occipital response (110-160 ms) when they processed the spatial localization. This observation suggests that different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
Affiliation(s)
- G. Bertonati
- Unit for Visually Impaired People (U‐VIP)Istituto Italiano di TecnologiaGenoaItaly
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS)Università degli Studi di GenovaGenoaItaly
| | - M. B. Amadeo
- Unit for Visually Impaired People (U‐VIP)Istituto Italiano di TecnologiaGenoaItaly
| | - C. Campus
- Unit for Visually Impaired People (U‐VIP)Istituto Italiano di TecnologiaGenoaItaly
| | - M. Gori
- Unit for Visually Impaired People (U‐VIP)Istituto Italiano di TecnologiaGenoaItaly
| |
|
30
|
Casado-Palacios M, Tonelli A, Campus C, Gori M. Movement-related tactile gating in blindness. Sci Rep 2023; 13:16553. [PMID: 37783746 PMCID: PMC10545755 DOI: 10.1038/s41598-023-43526-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 09/25/2023] [Indexed: 10/04/2023] Open
Abstract
When we perform an action, self-elicited movement induces suppression of somatosensory information to the cortex, requiring correct motor-sensory and inter-sensory (i.e., cutaneous, kinesthetic, and proprioceptive) integration processes to be successful. However, recent work shows that blindness might affect some of these elements. The current study investigates the effect of movement on tactile perception and the role of vision in this process. We measured the velocity discrimination threshold in 18 sighted and 18 blind individuals by having them perceive a sequence of two movements and discriminate the faster one in passive and active touch conditions. Participants' Just Noticeable Difference (JND) was measured to quantify their precision. Results showed generally worse performance in the active touch condition than in the passive condition. In particular, this difference was significant in the blind group, regardless of blindness duration, but not in the sighted one. These findings suggest that the absence of visual calibration affects the motor-sensory and inter-sensory integration required during movement, diminishing the reliability of tactile signals in blind individuals. Our work highlights the need for intervention in this population and should be considered in the design of sensory substitution and reinforcement devices.
Affiliation(s)
- Maria Casado-Palacios
- DIBRIS, University of Genoa, Genoa, Italy
- UVIP- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
| | - Alessia Tonelli
- UVIP- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
| | - Claudio Campus
- UVIP- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
| | - Monica Gori
- UVIP- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy.
| |
|
31
|
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286 DOI: 10.1016/j.tics.2023.04.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 04/24/2023] [Accepted: 04/25/2023] [Indexed: 05/21/2023]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany.
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
|
32
|
Hughes L, Kargas N, Wilhelm M, Meyerhoff HS, Föcker J. The Impact of Audio-Visual, Visual and Auditory Cues on Multiple Object Tracking Performance in Children with Autism. Percept Mot Skills 2023; 130:2047-2068. [PMID: 37452765 PMCID: PMC10552336 DOI: 10.1177/00315125231187984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/18/2023]
Abstract
Previous studies have documented differences in processing multisensory information by children with autism compared to typically developing children. Furthermore, children with autism have been found to track fewer objects on a screen than those without autism, suggesting reduced attentional control. In the present study, we investigated whether children with autism (n = 33) and children without autism (n = 33) were able to track four target objects moving amongst four indistinguishable distractor objects while sensory cues were presented. During tracking, we presented various types of cues (auditory, visual, audio-visual, or no cues) while target objects bounced off the inner boundary of a centralized circle. We found that children with autism tracked fewer targets than children without autism. Furthermore, children without autism showed improved tracking performance in the presence of visual cues, whereas children with autism did not benefit from sensory cues. Whereas multiple object tracking performance improved with increasing age in children without autism, especially when using audio-visual cues, children with autism did not show age-related improvement in tracking. These results are in line with the hypothesis that attention and the ability to integrate sensory cues during tracking are reduced in children with autism. Our findings could contribute valuable insights for designing interventions that incorporate multisensory information.
Affiliation(s)
- Lily Hughes
- School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
| | - Niko Kargas
- School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
| | - Maximilian Wilhelm
- Center for Psychotherapy Research, University Hospital Heidelberg, Heidelberg, Germany
| | | | - Julia Föcker
- School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
| |
|
33
|
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. [PMID: 37038031 DOI: 10.3758/s13423-023-02254-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/08/2023] [Indexed: 04/12/2023]
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrate optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
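As a reference point, the standard two-cue maximum-likelihood (reliability-weighted) predictions tested in much of this literature can be written as a short sketch; the cue labels and noise values below are illustrative assumptions, not data from any particular study:

```python
import numpy as np

def mle_two_cue_prediction(sigma_1, sigma_2):
    """Standard reliability-weighted (MLE) predictions for two independent, unbiased cues:
    weights proportional to reliability (1/variance) and the predicted combined SD."""
    r1, r2 = 1 / sigma_1**2, 1 / sigma_2**2
    w1, w2 = r1 / (r1 + r2), r2 / (r1 + r2)
    sigma_combined = np.sqrt(1 / (r1 + r2))
    return w1, w2, sigma_combined

# Example: homing with landmarks (SD = 10 cm) and path integration (SD = 20 cm).
w_land, w_path, sd_combined = mle_two_cue_prediction(10.0, 20.0)
print(f"landmark weight = {w_land:.2f}, path-integration weight = {w_path:.2f}, "
      f"predicted combined SD = {sd_combined:.1f} cm")   # 0.80, 0.20, ~8.9 cm
```

Empirical weights and combined-condition variability are then compared against these predictions to judge whether navigators integrate optimally.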
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA.
| | - Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
| | - Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
| | - Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
| |
|
34
|
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Accepted: 06/29/2023] [Indexed: 08/08/2023] Open
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| |
|
35
|
Cheam C, Barisnikov K, Gentaz E, Lejeune F. Multisensory Texture Perception in Individuals with Williams Syndrome. CHILDREN (BASEL, SWITZERLAND) 2023; 10:1494. [PMID: 37761455 PMCID: PMC10528637 DOI: 10.3390/children10091494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 08/25/2023] [Accepted: 08/29/2023] [Indexed: 09/29/2023]
Abstract
The sensory profile of people with Williams syndrome (WS) is characterised by atypical visual and auditory perceptions that affect their daily lives and learning. However, no research has been carried out on their haptic perception, in particular in multisensory (visual and haptic) situations. The aim of this study was to evaluate the transfer of texture information from one modality to the other in people with WS. Children and adults with WS were included, as well as typically developing (TD) participants matched on chronological age (TD-CA), and TD children matched on mental age (TD-MA). All participants (N = 69) completed three matching tasks in which they had to compare two fabrics (same or different): visual, haptic and visuo-haptic. When the textures were different, the haptic and visual performances of people with WS were similar to those of TD-MA participants. Moreover, their visuo-haptic performances were lower than those of the two TD groups. These results suggest a delay in the acquisition of multisensory transfer abilities in individuals with WS. A positive link between mental age and visual and visuo-haptic abilities, found only in people with WS, suggests that they could benefit from early intervention to develop their ability to process and transfer multisensory information.
Affiliation(s)
- Caroline Cheam
- Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; (C.C.); (K.B.)
| | - Koviljka Barisnikov
- Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland; (C.C.); (K.B.)
| | - Edouard Gentaz
- Sensorimotor, Affective and Social Development Unit (SMAS), Faculty of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland;
| | - Fleur Lejeune
- Sensorimotor, Affective and Social Development Unit (SMAS), Faculty of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland;
| |
|
36
|
Streri A, de Hevia MD. How do human newborns come to understand the multimodal environment? Psychon Bull Rev 2023; 30:1171-1186. [PMID: 36862372 DOI: 10.3758/s13423-023-02260-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/18/2023] [Indexed: 03/03/2023]
Abstract
For a long time, newborns were considered human beings devoid of perceptual abilities who had to learn everything about their physical and social environment with effort. Extensive empirical evidence gathered in the last decades has systematically invalidated this notion. Despite the relatively immature state of their sensory modalities, newborns have perceptions that are acquired through, and triggered by, their contact with the environment. More recently, the study of the fetal origins of the sensory modes has revealed that in utero all the senses prepare to operate, except for vision, which becomes functional only in the first minutes after birth. This discrepancy between the maturation of the different senses raises the question of how human newborns come to understand our complex, multimodal environment, and more precisely how the visual mode interacts with the tactile and auditory modes from birth. After having defined the tools that newborns use to interact with other sensory modalities, we review studies across different fields of research, such as the intermodal transfer between touch and vision, auditory-visual speech perception, and the existence of links between the dimensions of space, time, and number. Overall, evidence from these studies supports the idea that human newborns are spontaneously driven, and cognitively equipped, to link information collected by the different sensory modes in order to create a representation of a stable world.
Affiliation(s)
- Arlette Streri
- Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France
| | - Maria Dolores de Hevia
- Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France.
| |
|
37
|
Morelli F, Schiatti L, Cappagli G, Martolini C, Gori M, Signorini S. Clinical assessment of the TechArm system on visually impaired and blind children during uni- and multi-sensory perception tasks. Front Neurosci 2023; 17:1158438. [PMID: 37332868 PMCID: PMC10272406 DOI: 10.3389/fnins.2023.1158438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 04/12/2023] [Indexed: 06/20/2023] Open
Abstract
We developed the TechArm system as a novel technological tool intended for visual rehabilitation settings. The system is designed to provide a quantitative assessment of the stage of development of perceptual and functional skills that are normally vision-dependent, and to be integrated in customized training protocols. Indeed, the system can provide uni- and multisensory stimulation, allowing visually impaired people to train their capability of correctly interpreting non-visual cues from the environment. Importantly, the TechArm is suitable for use by very young children, when the rehabilitative potential is greatest. In the present work, we validated the TechArm system on a pediatric population of low-vision, blind, and sighted children. In particular, four TechArm units were used to deliver uni-sensory (audio or tactile) or multisensory (audio-tactile) stimulation on the participant's arm, and the participant was asked to report the number of active units. Results showed no significant difference among groups (normal or impaired vision). Overall, we observed the best performance in the tactile condition, while auditory accuracy was around chance level. Also, we found that the audio-tactile condition was better than the audio condition alone, suggesting that multisensory stimulation is beneficial when perceptual accuracy and precision are low. Interestingly, we observed that for low-vision children the accuracy in the audio condition improved proportionally to the severity of the visual impairment. Our findings confirmed the TechArm system's effectiveness in assessing perceptual competencies in sighted and visually impaired children, and its potential for developing personalized rehabilitation programs for people with visual and sensory impairments.
Affiliation(s)
- Federica Morelli
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
| | - Lucia Schiatti
- Computer Science and Artificial Intelligence Lab and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Boston, MA, United States
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| | - Giulia Cappagli
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| | - Chiara Martolini
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
| | - Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| | - Sabrina Signorini
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
| |
|
38
|
Stanley BM, Chen YC, Maurer D, Lewis TL, Shore DI. Developmental changes in audiotactile event perception. J Exp Child Psychol 2023; 230:105629. [PMID: 36731280 DOI: 10.1016/j.jecp.2023.105629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 01/04/2023] [Accepted: 01/05/2023] [Indexed: 02/04/2023]
Abstract
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of magnitude of illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
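For illustration, a minimal sketch of the equal-variance signal detection measures (discriminability d' and criterion c) that such an analysis yields is shown below; the response coding and the example hit and false-alarm rates are assumptions for illustration, not the authors' data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance signal detection measures: discriminability (d') and criterion (c)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Illustrative coding for fission trials (an assumption, not the authors' exact scheme):
# responding "two taps" to a genuine double tap is a hit; responding "two taps" to a single
# tap paired with two sounds (the illusion trial) is a false alarm.
d_prime, criterion = sdt_measures(hit_rate=0.85, false_alarm_rate=0.40)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

Lower d' corresponds to the reduced discriminability reported for younger children, while shifts in c capture the decisional bias component of the illusions.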
Affiliation(s)
- Brendan M Stanley
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
| | - Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City 252, Taiwan
| | - Daphne Maurer
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
| | - Terri L Lewis
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
| | - David I Shore
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada; Multisensory Perception Laboratory, Division of Multisensory Mind Inc., Hamilton, Ontario L8S 4K1, Canada.
| |
|
39
|
Murray CA, Shams L. Crossmodal interactions in human learning and memory. Front Hum Neurosci 2023; 17:1181760. [PMID: 37266327 PMCID: PMC10229776 DOI: 10.3389/fnhum.2023.1181760] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 05/02/2023] [Indexed: 06/03/2023] Open
Abstract
Most studies of memory and perceptual learning in humans have employed unisensory settings to simplify the study paradigm. However, in daily life we are often surrounded by complex and cluttered scenes made up of many objects and sources of sensory stimulation. Our experiences are, therefore, highly multisensory both when passively observing the world and when acting and navigating. We argue that human learning and memory systems are evolved to operate under these multisensory and dynamic conditions. The nervous system exploits the rich array of sensory inputs in this process, is sensitive to the relationship between the sensory inputs, continuously updates sensory representations, and encodes memory traces based on the relationship between the senses. We review some recent findings that demonstrate a range of human learning and memory phenomena in which the interactions between visual and auditory modalities play an important role, and suggest possible neural mechanisms that can underlie some surprising recent findings. We outline open questions as well as directions of future research to unravel human perceptual learning and memory.
Affiliation(s)
- Carolyn A. Murray
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
| | - Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Bioengineering, Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA, United States
| |
|
40
|
Davide E, Jenifer M, Alessia T, Alberto M, Monica G. Young children can use their subjective straight-ahead to remap visuo-motor alterations. Sci Rep 2023; 13:6427. [PMID: 37081091 PMCID: PMC10119127 DOI: 10.1038/s41598-023-33127-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 04/07/2023] [Indexed: 04/22/2023] Open
Abstract
Young children and adults process spatial information differently: the former use their bodies as their primary reference, while adults seem capable of using abstract reference frames. The transition is estimated to occur between the 6th and the 12th year of age. The mechanisms underlying spatial encoding in children and adults are unclear, as are those underlying the transition. Here, we investigated the role of the subjective straight-ahead (SSA), the mental model of the body's antero-posterior half-plane, in spatial encoding before and after the expected transition. We tested 6-7-year-old children, 10-11-year-old children, and adults on a spatial alignment task in virtual reality, looking for differences in performance when targets were placed frontally or sideways. Performance differences were assessed both in a naturalistic baseline condition and in a test condition that discouraged the use of body-centered coordinates through a head-related visuo-motor conflict. We found no differences in the baseline condition, while all groups showed differences between central and lateral targets (SSA effect) in the visuo-motor conflict condition, with 6-7-year-old children showing the largest effect. These results confirm the expected transition timing; moreover, they suggest that children can abstract from the body using their SSA and that the transition reflects the maturation of a world-centered reference frame.
Affiliation(s)
- Esposito Davide
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, 16163, Genova, Italy.
| | - Miehlbradt Jenifer
- Bertarelli Foundation Chair in Translational Neuroengineering, EPFL, 1015, Lausanne, Switzerland
| | - Tonelli Alessia
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, 16163, Genova, Italy
| | - Mazzoni Alberto
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127, Pontedera, Italy
| | - Gori Monica
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, 16163, Genova, Italy
| |
|
41
|
Navarro-Guerrero N, Toprak S, Josifovski J, Jamone L. Visuo-haptic object perception for robots: an overview. Auton Robots 2023. [DOI: 10.1007/s10514-023-10091-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023]
Abstract
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
|
42
|
Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. [PMID: 36936618 PMCID: PMC10017858 DOI: 10.3389/fnhum.2023.1058617] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 01/09/2023] [Indexed: 03/06/2023] Open
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for the successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at a highly above chance level after a brief 1-h-long session of online training and one on-site training session of an average of 20 min. They could even draw a 2D representation of this image in some cases. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- *Correspondence: Shira Shvadron,
| | - Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
| | - Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
| | - Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| |
|
43
|
Aston S, Pattie C, Graham R, Slater H, Beierholm U, Nardini M. Newly learned shape-color associations show signatures of reliability-weighted averaging without forced fusion or a memory color effect. J Vis 2022; 22:8. [PMID: 36580296 PMCID: PMC9804025 DOI: 10.1167/jov.22.13.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Reliability-weighted averaging of multiple perceptual estimates (or cues) can improve precision. Research suggests that newly learned statistical associations can be rapidly integrated in this way for efficient decision-making. Yet, it remains unclear if the integration of newly learned statistics into decision-making can directly influence perception, rather than taking place only at the decision stage. In two experiments, we implicitly taught observers novel associations between shape and color. Observers made color matches by adjusting the color of an oval to match a simultaneously presented reference. As the color of the oval changed across trials, so did its shape, according to a novel mapping of axis ratio to color. Observers showed signatures of reliability-weighted averaging: a precision improvement in both experiments and, in Experiment 2, reweighting of the newly learned shape cue with changes in uncertainty. To ask whether this was accompanied by perceptual effects, Experiment 1 tested for forced fusion by measuring color discrimination thresholds with and without incongruent novel cues. Experiment 2 tested for a memory color effect, with observers adjusting the color of ovals with different axis ratios until they appeared gray. There was no evidence for forced fusion, and we observed the opposite of a memory color effect. Overall, our results suggest that the ability to quickly learn novel cues and integrate them with familiar cues is not immediately (within the short duration of our experiments and in the domain of color and shape) accompanied by common perceptual effects.
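To illustrate the reweighting signature mentioned above, here is a minimal sketch of the reliability-weighted prediction for the weight given to a newly learned cue; the cue labels and noise values are illustrative assumptions, not the study's parameters:

```python
def predicted_weight_new_cue(sigma_new, sigma_familiar):
    """Reliability-weighted prediction for the weight given to a newly learned cue
    when it is averaged with a familiar cue (both assumed unbiased and independent)."""
    r_new, r_familiar = 1 / sigma_new**2, 1 / sigma_familiar**2
    return r_new / (r_new + r_familiar)

# As the familiar (color) cue becomes noisier, predicted reliance on the newly
# learned (shape) cue rises, which is the reweighting signature described above.
for sigma_color in (1.0, 2.0, 4.0):
    w = predicted_weight_new_cue(sigma_new=2.0, sigma_familiar=sigma_color)
    print(f"color-cue SD = {sigma_color}: predicted shape-cue weight = {w:.2f}")
```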
Collapse
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, UK,
| | - Cat Pattie
- Biosciences Institute, Newcastle University, Newcastle, UK,
| | - Rachael Graham
- Department of Psychology, Durham University, Durham, UK,
| | - Heather Slater
- Department of Psychology, Durham University, Durham, UK,
| | | | - Marko Nardini
- Department of Psychology, Durham University, Durham, UK,
| |
Collapse
|
44
|
Purpura G, Lai CYY, Previtali G, Gomez INB, Yung TWK, Tagliabue L, Cerroni F, Carotenuto M, Nacinovich R. Psychometric Properties of the Italian Version of Sensory Processing and Self-Regulation Checklist (SPSRC). Healthcare (Basel) 2022; 11:92. [PMID: 36611551 PMCID: PMC9818758 DOI: 10.3390/healthcare11010092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/15/2022] [Accepted: 12/23/2022] [Indexed: 12/29/2022] Open
Abstract
Sensory processing abilities play important roles in children's learning, behavioural and emotional regulation, and motor development. Moreover, it has been widely demonstrated that many children with neurodevelopmental disabilities show differences in sensory processing abilities and self-regulation compared with typically developing children. For these reasons, a complete evaluation of early symptoms is very important, and specific tools are needed to better understand and recognize these difficulties during childhood. The main aim of this study was to translate, culturally adapt, and validate the Sensory Processing and Self-Regulation Checklist (SPSRC), a 130-item caregiver-reported checklist covering children's sensory processing and self-regulation performance in daily life, in a population of Italian typically developing (TD) children. Preliminary testing of the SPSRC-IT was carried out in a sample of 312 TD children and 30 children with various developmental disabilities. The findings showed that the SPSRC-IT had high internal consistency and good discriminant, structural, and criterion validity with respect to the sensory processing and self-regulation abilities of children with and without disabilities. These data provide initial evidence on the reliability and validity of the SPSRC-IT, and the information obtained with the SPSRC-IT may be considered a starting point for widening the current understanding of sensory processing difficulties among children.
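As an aside on the psychometric terminology, internal-consistency claims of this kind are typically quantified with Cronbach's alpha. The sketch below illustrates that computation on simulated ratings; it is not the authors' analysis code, and the data are made up for the example.

```python
# Illustrative computation of Cronbach's alpha (internal consistency).
# The simulated ratings below are fabricated for demonstration only.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                  # shared trait per respondent
items = latent + 0.5 * rng.normal(size=(100, 10))   # 10 correlated items
print(round(cronbach_alpha(items), 2))              # high alpha expected (~0.97)
```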
Collapse
Affiliation(s)
- Giulia Purpura
- School of Medicine and Surgery, University of Milano Bicocca, 20900 Monza, Italy
- Correspondence: ; Tel.: +39-039-233-9279
| | - Cynthia Y. Y. Lai
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
| | - Giulia Previtali
- School of Medicine and Surgery, University of Milano Bicocca, 20900 Monza, Italy
| | - Ivan Neil B. Gomez
- Department of Occupational Therapy, University of Santo Tomas, Manila 1015, Philippines
| | - Trevor W. K. Yung
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
| | - Luca Tagliabue
- School of Medicine and Surgery, University of Milano Bicocca, 20900 Monza, Italy
- Child and Adolescent Health Department, San Gerardo Hospital, ASST of Monza, 20900 Monza, Italy
| | - Francesco Cerroni
- Clinic of Child and Adolescent Neuropsychiatry, Università degli Studi della Campania “Luigi Vanvitelli”, 81100 Caserta, Italy
| | - Marco Carotenuto
- Clinic of Child and Adolescent Neuropsychiatry, Università degli Studi della Campania “Luigi Vanvitelli”, 81100 Caserta, Italy
| | - Renata Nacinovich
- School of Medicine and Surgery, University of Milano Bicocca, 20900 Monza, Italy
- Child and Adolescent Health Department, San Gerardo Hospital, ASST of Monza, 20900 Monza, Italy
| |
Collapse
|
45
|
The development of audio-visual temporal precision precedes its rapid recalibration. Sci Rep 2022; 12:21591. [PMID: 36517503 PMCID: PMC9751280 DOI: 10.1038/s41598-022-25392-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 11/29/2022] [Indexed: 12/15/2022] Open
Abstract
Through development, multisensory systems reach a balance between stability and flexibility: the systems optimally integrate cross-modal signals from the same events, while remaining adaptive to environmental changes. Is continuous intersensory recalibration required to shape optimal integration mechanisms, or does multisensory integration develop prior to recalibration? Here, we examined the development of multisensory integration and rapid recalibration in the temporal domain by re-analyzing published datasets for audio-visual, audio-tactile, and visual-tactile combinations. Results showed that children reach an adult level of precision in audio-visual simultaneity perception, and show the first sign of rapid recalibration, at 9 years of age. In contrast, rapid recalibration was very weak for the other cross-modal combinations at all ages, even when adult levels of temporal precision had developed. Thus, the development of audio-visual rapid recalibration appears to require the maturation of temporal precision. It may serve to accommodate distance-dependent differences in travel time between light and sound.
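As background on how rapid recalibration is usually quantified, the sketch below fits a simultaneity-judgment curve and compares the point of subjective simultaneity (PSS) after audio-leading versus vision-leading trials; the shift between the two fits is the recalibration effect. The data and parameter values are illustrative and are not drawn from the re-analyzed datasets.

```python
# Minimal sketch of quantifying rapid temporal recalibration from
# simultaneity judgments. Data below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, peak, pss, width):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return peak * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

def fit_pss(soas, p_simultaneous):
    params, _ = curve_fit(gaussian, soas, p_simultaneous,
                          p0=[1.0, 0.0, 100.0], maxfev=5000)
    return params[1]  # fitted PSS in ms

soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
after_audio_lead = np.array([0.10, 0.30, 0.70, 0.95, 0.80, 0.40, 0.15])
after_vision_lead = np.array([0.15, 0.40, 0.80, 0.95, 0.70, 0.30, 0.10])

shift = fit_pss(soas, after_audio_lead) - fit_pss(soas, after_vision_lead)
print(f"rapid recalibration (PSS shift): {shift:.1f} ms")
```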
Collapse
|
46
|
Negen J, Slater H, Bird LA, Nardini M. Internal biases are linked to disrupted cue combination in children and adults. J Vis 2022; 22:14. [DOI: 10.1167/jov.22.12.14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University, Liverpool, UK
| | | | - Laura-Ashleigh Bird
- Department of Psychology, Durham University, Durham, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
| | - Marko Nardini
- Department of Psychology, Durham University, Durham, UK
| |
Collapse
|
47
|
The role of hand size in body representation: a developmental investigation. Sci Rep 2022; 12:19281. [PMID: 36369342 PMCID: PMC9652309 DOI: 10.1038/s41598-022-23716-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 11/03/2022] [Indexed: 11/13/2022] Open
Abstract
Knowledge of one's own body size is a crucial facet of body representation, both for acting on the environment and perhaps also for constraining body ownership. However, representations of body size may be somewhat plastic, particularly to allow for physical growth in childhood. Here we report a developmental investigation into the role of hand size in body representation (the sense of body ownership, perception of hand position, and perception of own-hand size). Using the rubber hand illusion paradigm, we presented fake hands of different sizes (60%, 80%, 100%, 120%, or 140% of typical size) to three age groups (6- to 7-year-olds, 12- to 13-year-olds, and adults; N = 229). We found no evidence that hand size constrains ownership or position: participants embodied hands which were both larger and smaller than their own, and indeed judged their own hands to have changed size following the illusion. Children and adolescents embodied the fake hands more than adults did, with a greater tendency to feel that their own hand had changed size. Adolescents were particularly sensitive to multisensory information. In sum, we found substantial plasticity in the representation of own-body size, with partial support for the hypothesis that children have looser body-size representations than adults.
Collapse
|
48
|
Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022; 176:108391. [DOI: 10.1016/j.neuropsychologia.2022.108391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 08/16/2022] [Accepted: 10/01/2022] [Indexed: 11/15/2022]
|
49
|
Deaf individuals use compensatory strategies to estimate visual time events. Brain Res 2022; 1798:148148. [DOI: 10.1016/j.brainres.2022.148148] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 10/27/2022] [Accepted: 10/28/2022] [Indexed: 11/08/2022]
|
50
|
Senna I, Piller S, Gori M, Ernst M. The power of vision: calibration of auditory space after sight restoration from congenital cataracts. Proc Biol Sci 2022; 289:20220768. [PMID: 36196538 PMCID: PMC9532985 DOI: 10.1098/rspb.2022.0768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 09/12/2022] [Indexed: 11/12/2022] Open
Abstract
Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used to calibrate space in the other senses. Here we investigated whether sight restoration after prolonged early-onset visual impairment can lead to the development of more accurate auditory space perception. We tested participants who were surgically treated for congenital dense bilateral cataracts several years after birth. In Experiment 1, we assessed participants' ability to understand spatial relationships among sounds by asking them to spatially bisect three consecutive, laterally separated sounds. Participants tested after surgery performed better than participants tested before surgery, but still worse than sighted controls. In Experiment 2, we demonstrated that single-sound localization in the two-dimensional frontal plane improves quickly after surgery, approaching the performance levels of sighted controls. Such recovery seems to be mediated by visual acuity, as participants who gained higher post-surgical visual acuity performed better in both experiments. These findings provide strong support for the hypothesis that vision calibrates auditory space perception. Importantly, they also demonstrate that this process can occur even when vision is restored after years of visual deprivation.
Collapse
Affiliation(s)
- Irene Senna
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
| | - Sophia Piller
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
| | - Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
| | - Marc Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
| |
Collapse
|