1. Adam KCS, Klatt LI, Miller JA, Rösner M, Fukuda K, Kiyonaga A. Beyond Routine Maintenance: Current Trends in Working Memory Research. J Cogn Neurosci 2025; 37:1035-1052. PMID: 39792640. DOI: 10.1162/jocn_a_02298.
Abstract
Working memory (WM) is an evolving concept. Our understanding of the neural functions that support WM develops iteratively alongside the approaches used to study it, and both can be profoundly shaped by available tools and prevailing theoretical paradigms. Here, the organizers of the 2024 Working Memory Symposium, inspired by this year's meeting, highlight current trends and looming questions in WM research. This review is organized into sections describing (1) ongoing efforts to characterize WM function across sensory modalities, (2) the growing appreciation that WM representations are malleable to context and future actions, (3) the enduring problem of how multiple WM items and features are structured and integrated, and (4) new insights about whether WM shares function with other cognitive processes that have conventionally been considered distinct. The review aims to chronicle where the field is headed and calls attention to issues that are paramount for future research.
2. Malzacher A, Hilbig T, Pecka M, Ferreiro DN. Visual nudging of navigation strategies improves frequency discrimination during auditory-guided locomotion. Front Neurosci 2025; 19:1535759. PMID: 40177372. PMCID: PMC11963732. DOI: 10.3389/fnins.2025.1535759.
Abstract
Perception in natural environments requires integrating multisensory inputs while navigating our surroundings. During locomotion, sensory cues such as vision and audition change coherently, providing crucial environmental information. This integration may affect perceptual thresholds through sensory interference: vision often dominates in multimodal contexts, overshadowing auditory information and potentially degrading audition. While traditional laboratory experiments offer controlled insights into sensory integration, they often fail to replicate the dynamic, multisensory interactions of real-world behavior. We used a naturalistic paradigm in which participants navigated an arena in search of a target, guided by position-dependent auditory cues. Previous findings showed that frequency discrimination thresholds during self-motion matched those in stationary paradigms, even though participants often relied on visually dominated navigation instead of auditory feedback, suggesting that vision might affect auditory perceptual thresholds in naturalistic settings. Here, we manipulated visual input to examine its effect on frequency discrimination and search strategy selection. By degrading visual input, we nudged participants' attention toward audition, leveraging subtle sensory adjustments to promote adaptive use of auditory cues without restricting their freedom of choice. This approach thus explores how attentional shifts influence multisensory integration during self-motion. Our results show that frequency discrimination thresholds improved when visual input was restricted, suggesting that reducing visual interference can increase auditory sensitivity. This is consistent with adaptive behavioral theories, which hold that individuals can dynamically adjust their perceptual strategies to leverage the most reliable sensory inputs. These findings contribute to a better understanding of multisensory integration, highlighting the flexibility of sensory systems in complex environments.
Affiliation(s)
- Annalenia Malzacher
  - Division of Neurobiology, Faculty of Biology, LMU Biocenter, Ludwig Maximilian University, Munich, Germany
  - TUM School of Life Sciences, Technical University of Munich, Freising-Weihenstephan, Germany
- Tobias Hilbig
  - TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
  - Department of Computer Science and Mathematics, Munich University of Applied Sciences, Munich, Germany
- Michael Pecka
  - Division of Neurobiology, Faculty of Biology, LMU Biocenter, Ludwig Maximilian University, Munich, Germany
- Dardo N. Ferreiro
  - Division of Neurobiology, Faculty of Biology, LMU Biocenter, Ludwig Maximilian University, Munich, Germany
  - Department of General Psychology and Education, Ludwig Maximilian University, Munich, Germany
3. Décaillet M, Christensen AP, Besuchet L, Huguenin-Virchaux C, Fischer Fumeaux CJ, Denervaud S, Schneider J. Characterization of language abilities and semantic networks in very preterm children at school-age. PLoS One 2025; 20:e0317535. PMID: 39879200. DOI: 10.1371/journal.pone.0317535.
Abstract
It is well established that very preterm children (<32 weeks gestational age) present language and memory impairments compared with full-term children. However, differences in their underlying semantic memory structure have not yet been studied. Yet the way concepts are learned and organized across development relates to children's later capacity to retrieve and use information; semantic memory organization could therefore underlie several of the cognitive deficits observed in very preterm children. Computational models can characterize semantic networks through three coefficients calculated on spoken language: average shortest path length (i.e., distance between concepts), clustering (i.e., local interconnectivity), and modularity (i.e., compartmentalization into small sub-networks). Here we assessed these coefficients in 38 very preterm schoolchildren (aged 8-10 years) compared with 38 full-term schoolchildren (aged 7-10 years) based on a verbal fluency task. Using semantic network analysis, very preterm children showed a longer distance between concepts and lower local interconnectivity than full-term children. In addition, we found a trend toward higher modularity at the global level in very preterm children compared with full-term children. These findings provide preliminary evidence that very preterm children show subtle impairments in the organization of their semantic network, encouraging adaptation of the support and education they receive.
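The three coefficients named above map directly onto standard graph metrics. A minimal sketch (not the authors' code), assuming the Python networkx library and an invented toy word-association network:

```python
# Illustrative only: nodes are words, edges link semantically associated
# words; in the study these would be derived from verbal fluency responses.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph([
    ("dog", "cat"), ("cat", "mouse"), ("dog", "wolf"), ("wolf", "fox"),
    ("apple", "pear"), ("pear", "plum"), ("apple", "plum"), ("mouse", "apple"),
])

aspl = nx.average_shortest_path_length(G)       # distance between concepts
clustering = nx.average_clustering(G)           # local interconnectivity
parts = community.greedy_modularity_communities(G)
Q = community.modularity(G, parts)              # compartmentalization

print(f"ASPL={aspl:.2f}  clustering={clustering:.2f}  modularity={Q:.2f}")
```

Group differences would then be tested on these per-child coefficients rather than on a single hand-built graph.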
Affiliation(s)
- Marion Décaillet
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense, Innovation, and Research Center, Lausanne, Switzerland
- Alexander P Christensen
  - Psychology and Human Development, Vanderbilt University, Nashville, TN, United States of America
- Laureline Besuchet
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense, Innovation, and Research Center, Lausanne, Switzerland
- Cléo Huguenin-Virchaux
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense, Innovation, and Research Center, Lausanne, Switzerland
- Céline J Fischer Fumeaux
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Solange Denervaud
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - CIBM Center for Biomedical Imaging, Lausanne, Switzerland
  - MRI Animal Imaging and Technology, Polytechnical School of Lausanne, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Juliane Schneider
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense, Innovation, and Research Center, Lausanne, Switzerland
4. Uluç I, Turpin T, Kotlarz P, Lankinen K, Mamashli F, Ahveninen J. Comparing auditory and visual aspects of multisensory working memory using bimodally matched feature patterns. Exp Brain Res 2024; 243:38. PMID: 39738596. PMCID: PMC11848833. DOI: 10.1007/s00221-024-06991-9.
Abstract
Working memory (WM) reflects the transient maintenance of information in the absence of external input, which can be attained via multiple senses separately or simultaneously. Regarding WM, the prevailing literature suggests the dominance of vision over other sensory systems. However, this imbalance may stem from the challenge of finding comparable stimuli across modalities. Here, we addressed this problem by using a balanced multisensory retro-cue WM design, which employed combinations of auditory (ripple sounds) and visuospatial (Gabor patches) patterns, adjusted relative to each participant's discrimination ability. In three separate experiments, the participant was asked to determine whether the (retro-cued) auditory and/or visual items maintained in WM matched or mismatched the subsequent probe stimulus. In Experiment 1, all stimuli were audiovisual, and the probes were either fully mismatching, only partially mismatching, or fully matching the memorized item. Experiment 2 was otherwise the same as Experiment 1, but the probes were unimodal. In Experiment 3, the participant was cued to maintain only the auditory or visual aspect of an audiovisual item pair. In Experiments 1 and 3, matching performance was significantly more accurate for the auditory than for the visual attributes of probes. These results suggest that when perceptual and task demands are bimodally equated, auditory attributes can be matched to multisensory items in WM at least as accurately as, if not more precisely than, their visual counterparts.
Affiliation(s)
- Işıl Uluç
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- Tori Turpin
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
- Parker Kotlarz
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- Kaisu Lankinen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149, 13th St, Charlestown, MA, 02129, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
5. Allen RJ, Havelka J, Morey CC, Darling S. Hanging on the telephone: Maintaining visuospatial bootstrapping over time in working memory. Mem Cognit 2024; 52:1798-1815. PMID: 37278958. PMCID: PMC11588821. DOI: 10.3758/s13421-023-01431-5.
Abstract
Visuospatial bootstrapping (VSB) refers to the phenomenon in which performance on a verbal working memory task can be enhanced by presenting the verbal material within a familiar visuospatial configuration. This effect is part of a broader literature concerning how working memory is influenced by use of multimodal codes and contributions from long-term memory. The present study aimed to establish whether the VSB effect extends over a brief (5-s) delay period, and to explore the possible mechanisms operating during retention. The VSB effect, as indicated by a verbal recall advantage for digit sequences presented within a familiar visuospatial configuration (modelled on the T-9 keypad) relative to a single-location display, was observed across four experiments. The presence and size of this effect changed with the type of concurrent task activity applied during the delay. Articulatory suppression (Experiment 1) increased the visuospatial display advantage, while spatial tapping (Experiment 2) and a visuospatial judgment task (Experiment 3) both removed it. Finally, manipulation of the attentional demands placed by a verbal task also reduced (but did not abolish) this effect (Experiment 4). This pattern of findings demonstrates how provision of familiar visuospatial information at encoding can continue to support verbal working memory over time, with varying demands on modality-specific and general processing resources.
Affiliation(s)
- Stephen Darling
  - Division of Psychology, Sociology and Education, Queen Margaret University, Edinburgh, UK
6. Lambez B, Vakil E, Azouvi P, Vallat-Azouvi C. Working memory multicomponent model outcomes in individuals with traumatic brain injury: Critical review and meta-analysis. J Int Neuropsychol Soc 2024; 30:895-911. PMID: 39523448. DOI: 10.1017/s1355617724000468.
Abstract
OBJECTIVE: Traumatic brain injury (TBI) often leads to cognitive impairments, particularly of working memory (WM). This meta-analysis examines the impact of TBI on WM, taking into account moderating factors that have received little attention in previous research: severity of injury, the different domains of Baddeley's multi-component model, the interaction between these two factors, and the interaction with other domains of executive function. METHOD: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic review and meta-analysis searched Google Scholar, PubMed, and PsycNET for studies with objective WM measures. Multiple meta-analyses were performed to compare the effects of TBI severity on different WM components. Twenty-four English-language, peer-reviewed articles, mostly cross-sectional, were included. RESULTS: TBI significantly impairs general WM and all components of Baddeley's model, most notably the Central Executive (d = 0.74). Two severity categories, mild-moderate and moderate-severe, were identified. Impairment was found across severities, with moderate-severe TBI showing the largest effect size (d = 0.81). Individuals with moderate-severe TBI showed greater impairments in the Central Executive and Episodic Buffer than those with mild-moderate injury, whereas no such differences were found for the Phonological Loop and Visuospatial Sketchpad. CONCLUSIONS: These findings enhance our understanding of WM deficits across TBI severities, highlighting the importance of assessing and treating WM in clinical practice and intervention planning.
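Pooled effect sizes of the kind reported here typically come from a random-effects model. A minimal sketch, assuming the DerSimonian-Laird estimator; the per-study effect sizes and variances below are hypothetical, not taken from the article:

```python
import numpy as np

def dersimonian_laird(d, var):
    """Pool standardized mean differences under a random-effects model."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                  # between-study variance
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    d_pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_pooled, se, tau2

# Hypothetical per-study Cohen's d values and sampling variances.
d_pooled, se, tau2 = dersimonian_laird([0.9, 0.7, 0.6, 0.8],
                                       [0.04, 0.05, 0.06, 0.03])
```

Separate pools of this kind, one per WM component and severity band, would yield the component-wise effect sizes the abstract reports.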
Affiliation(s)
- Bar Lambez
  - Loewenstein Rehabilitation Center, Raanana, Israel
  - Department of Psychology and Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Eli Vakil
  - Department of Psychology and Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
- Philippe Azouvi
  - AP-HP, GH Paris Saclay, Hôpital Raymond Poincaré, service de Médecine Physique et de Réadaptation, boulevard Raymond Poincaré, Garches, France
  - Equipe INSERM DevPsy, CESP, UMR, Université Paris-Saclay, UVSQ, France
- Claire Vallat-Azouvi
  - Laboratoire DysCo, University of Paris-8-Saint-Denis, 2, rue de la Liberté, Saint-Denis, France
  - Antenne UEROS-UGECAMIDF, Raymond-Poincaré Hospital, 104, boulevard Raymond-Poincaré, Garches, France
7. Xu L, Cai B, Yue C, Wang A. Multisensory working memory capture of attention. Atten Percept Psychophys 2024; 86:2363-2373. PMID: 39294324. DOI: 10.3758/s13414-024-02960-0.
Abstract
During visual search, representations in working memory (WM) can guide the deployment of attention toward memory-matching visual input. Although previous studies have demonstrated that multisensory interactions facilitate WM and visual search, it remains unclear whether multisensory interaction influences attentional capture by WM. To address this issue, the present study adopted a dual-task paradigm pairing a visual search task with a WM task, in which the memory modality was manipulated to be either visual or audiovisual. The results revealed memory-driven attentional capture under both the visual and the audiovisual conditions. Additionally, the capture effects and response time (RT) costs were weaker under the audiovisual condition than under the visual condition, even on trials with the earliest RTs, whereas RT benefits under the audiovisual condition were comparable to those under the visual condition. These findings suggest that multisensory interactions can enhance cognitive control, leading to robust strategic effects and improved search performance. In this process, cognitive control tends to suppress attentional capture by WM-matching distractors rather than enhance attentional capture by WM-matching targets. The present study offers new insights into the influence of multisensory interactions on attentional capture by WM.
Affiliation(s)
- Lei Xu
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Biye Cai
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
  - School of Physical Education and Sports Science, Soochow University, Suzhou, China
- Chunlin Yue
  - School of Physical Education and Sports Science, Soochow University, Suzhou, China
- Aijun Wang
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
8. Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024; 25:625-642. PMID: 39090214. DOI: 10.1038/s41583-024-00845-7.
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
Affiliation(s)
- Daniel Senkowski
  - Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
  - Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
9. Huang J, Wang A, Zhang M. The audiovisual competition effect induced by temporal asynchronous encoding weakened the visual dominance in working memory retrieval. Memory 2024; 32:1069-1082. PMID: 39067050. DOI: 10.1080/09658211.2024.2381782.
Abstract
Converging evidence suggests a facilitative effect of multisensory interactions on memory performance, reflected in higher accuracy or faster response times under bimodal than unimodal encoding conditions. However, relatively little attention has been given to the effect of multisensory competition on memory. The present study adopted an adaptive staircase test to measure the point of subjective simultaneity (PSS), combined with a delayed match-to-sample (DMS) task, to probe the effect of audiovisual competition during the encoding stage on subsequent unisensory retrieval. The results showed a robust visual dominance effect and a multisensory interference effect in WM retrieval, regardless of whether the audiovisual presentation was subjectively synchronous or asynchronous. However, a weakened visual dominance effect was observed when the auditory stimulus was presented before the visual stimulus during encoding, particularly in the semantically incongruent case. These findings reveal that the prior entry of sensory information at the early perceptual stage can affect processing at the later cognitive stage, and support the evidence for a persistent advantage of the visuospatial sketchpad in multisensory WM.
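An adaptive staircase for estimating the PSS can be sketched as a simple simulation; the observer model, step size, and all parameter values below are invented for illustration and are not the authors' procedure:

```python
import random

def staircase_pss(true_pss=-30.0, noise=40.0, start=200.0, step=20.0,
                  reversals_needed=8, rng=random.Random(0)):
    """1-up/1-down staircase on audiovisual SOA (ms). It converges near the
    50% point of the simulated observer's psychometric function, i.e. the PSS."""
    soa, direction, reversals, rev_soas = start, -1, 0, []
    while reversals < reversals_needed:
        # Simulated observer: reports "visual first" when perceived SOA > 0.
        perceived = soa - true_pss + rng.gauss(0.0, noise)
        new_direction = -1 if perceived > 0 else +1   # step toward the PSS
        if new_direction != direction:
            reversals += 1
            rev_soas.append(soa)
        direction = new_direction
        soa += direction * step
    # Average the later reversal points; discard the first as warm-up.
    return sum(rev_soas[1:]) / len(rev_soas[1:])

pss_estimate = staircase_pss()
```

A real experiment would replace the simulated observer with participant keypresses, but the reversal-averaging logic is the same.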
Affiliation(s)
- Jie Huang
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Aijun Wang
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
  - School of Psychology, Northeast Normal University, Changchun, People's Republic of China
  - Department of Psychology, Suzhou University of Science and Technology, Suzhou, People's Republic of China
  - Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
10. Décaillet M, Denervaud S, Huguenin-Virchaux C, Besuchet L, Fischer Fumeaux CJ, Murray MM, Schneider J. The impact of premature birth on auditory-visual processes in very preterm schoolchildren. NPJ Sci Learn 2024; 9:42. PMID: 38971881. PMCID: PMC11227572. DOI: 10.1038/s41539-024-00257-3.
Abstract
Interactions between stimuli from different sensory modalities and their integration are central to daily life, contributing to improved perception. Being born prematurely and the subsequent hospitalization can have an impact not only on sensory processes, but also on the manner in which information from different senses is combined-i.e., multisensory processes. Very preterm (VPT) children (<32 weeks gestational age) present impaired multisensory processes in early childhood persisting at least through the age of five. However, it remains largely unknown whether and how these consequences persist into later childhood. Here, we evaluated the integrity of auditory-visual multisensory processes in VPT schoolchildren. VPT children (N = 28; aged 8-10 years) received a standardized cognitive assessment and performed a simple detection task at their routine follow-up appointment. The simple detection task involved pressing a button as quickly as possible upon presentation of an auditory, visual, or simultaneous audio-visual stimulus. Compared to full-term (FT) children (N = 23; aged 6-11 years), reaction times of VPT children were generally slower and more variable, regardless of sensory modality. Nonetheless, both groups exhibited multisensory facilitation on mean reaction times and inter-quartile ranges. There was no evidence that standardized cognitive or clinical measures correlated with multisensory gains of VPT children. However, while gains in FT children exceeded predictions based on probability summation and thus forcibly invoked integrative processes, this was not the case for VPT children. Our findings provide evidence of atypical multisensory profiles in VPT children persisting into school-age. These results could help in targeting supportive interventions for this vulnerable population.
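The probability-summation benchmark mentioned above is commonly tested with Miller's race-model inequality, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t): facilitation exceeding this bound cannot be produced by independent unimodal races. A minimal sketch with simulated reaction times (not the study's data):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Return CDF_AV(t) minus the race-model bound at each t; positive values
    mark facilitation that probability summation alone cannot explain."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(1.0, cdf(rt_a, t_grid) + cdf(rt_v, t_grid))
    return cdf(rt_av, t_grid) - bound

# Hypothetical RT samples (ms) with strong bimodal facilitation.
rng = np.random.default_rng(1)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(400, 60, 200)
rt_av = rng.normal(330, 50, 200)
t = np.arange(200, 600, 10)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
```

In the study's terms, FT children's fast-RT quantiles would show positive violations while VPT children's would not.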
Affiliation(s)
- Marion Décaillet
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Solange Denervaud
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cléo Huguenin-Virchaux
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Laureline Besuchet
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Céline J Fischer Fumeaux
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
- Juliane Schneider
  - The Sense Innovation and Research Center, Lausanne and Sion, Lausanne, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
11. Cai B, Tang X, Wang A, Zhang M. Semantically congruent bimodal presentation modulates cognitive control over attentional guidance by working memory. Mem Cognit 2024; 52:1065-1078. PMID: 38308161. DOI: 10.3758/s13421-024-01521-y.
Abstract
Although previous studies have established that audiovisual enhancement facilitates working memory and selective attention, it remains an open question whether audiovisual enhancement influences attentional guidance by working memory. To address this issue, the present study adopted a dual-task paradigm that combines a working memory task and a visual search task, in which the content of working memory was presented in audiovisual or visual modalities. Given the importance of search speed in memory-driven attentional suppression, we divided participants into two groups based on their reaction time (RT) in neutral trials and examined whether the audiovisual enhancement of attentional suppression was modulated by search speed. The results showed that the slow search group exhibited a robust memory-driven attentional suppression effect, and the suppression effect started earlier and was greater in magnitude in the audiovisual condition than in the visual-only condition. Among the fast search group, however, the suppression effect occurred only on trials with longer RTs in the visual-only condition, and its temporal dynamics were selectively improved in the audiovisual condition. Furthermore, the audiovisual enhancement of memory-driven attention evolved over time. These findings suggest that semantically congruent bimodal presentation can progressively facilitate the strength and temporal dynamics of memory-driven attentional suppression, and that search speed plays an important role in this process. This may be due to a synergistic effect between multisensory working memory representations and a top-down suppression mechanism. The present study demonstrates the flexible role of audiovisual enhancement in cognitive control over memory-driven attention.
Affiliation(s)
- Biye Cai
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
  - School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Aijun Wang
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
  - Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
  - Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
12. Fan M, Wong TWL. The effects of errorless psychomotor training in the Y balance lower limb reaching task. Psychol Res 2024; 88:156-166. PMID: 37353612. DOI: 10.1007/s00426-023-01831-x.
Abstract
This study investigated the effect of errorless psychomotor training, a motor training method involving perceptual, attentional, and psychological manipulations, on young adults' dynamic balance in a balance-related lower-limb reaching task (the Y balance reaching task). Thirty-nine participants (mean age = 27.03 years, SD = 2.64 years) were trained with different psychomotor training methods in the Y balance reaching task. Results showed that errorless psychomotor training significantly improved the participants' dynamic balance and proprioceptive abilities. Additionally, gaze fixation duration on the target during reaching decreased after errorless psychomotor training, suggesting that errorless psychomotor training could decrease visual information demand, compensated concurrently by up-weighting proprioception. This multisensory reweighting and cross-modal attention could contribute to the improvement of dynamic balance ability in sports.
Affiliation(s)
- Mengjiao Fan
- School of Public Health, Li Ka Shing Faculty of Medicine, The Hong Kong Jockey Club Building for Interdisciplinary Research, The University of Hong Kong, 3/F, 5 Sassoon Road, Pokfulam, Hong Kong SAR, China
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Thomson W L Wong
- Department of Rehabilitation Sciences, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Hong Kong SAR, China.
13
Haraguchi F, Hisakata R, Kaneko H. Temporal integration characteristics of an image defined by binocular disparity cues. Iperception 2024; 15:20416695231224138. [PMID: 38204517] [PMCID: PMC10777792] [DOI: 10.1177/20416695231224138]
Abstract
We can correctly recognize the content of an image even when its elements are presented sequentially within a limited time, as in slit viewing or a temporally divided painting. Clarifying how such temporally divided information is integrated and perceived is important for understanding the temporal properties of the visual system's information-processing mechanisms. Previous studies on this topic have typically used two-dimensional pictorial stimuli; few have considered the temporal integration of binocular disparity for recognizing objects defined by disparity. In this study, we examined image recognition based on the temporal integration of binocular disparity and compared it with recognition based on the temporal integration of luminance. The effect of element onset asynchrony (the time lag among presented elements) was broadly similar for disparity and luminance with randomly divided elements. Under slit-vision conditions, however, the tolerance range of spatiotemporal integration for luminance stimuli was much wider than that for disparity stimuli. These results indicate that the temporal integration mechanism in localized areas is common to disparity and luminance, whereas integration based on global motion differs between the two. We therefore conclude that global motion contributes little to the temporal integration of binocular disparity information for image recognition.
Affiliation(s)
- Fumiya Haraguchi
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Rumi Hisakata
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan
- Hirohiko Kaneko
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Japan
14
Turpin T, Uluç I, Kotlarz P, Lankinen K, Mamashli F, Ahveninen J. Comparing auditory and visual aspects of multisensory working memory using bimodally matched feature patterns. bioRxiv 2023:2023.08.03.551865. [PMID: 37577481] [PMCID: PMC10418174] [DOI: 10.1101/2023.08.03.551865]
Abstract
Working memory (WM) reflects the transient maintenance of information in the absence of external input, which can be attained via multiple senses separately or simultaneously. The prevailing WM literature suggests the dominance of vision over other sensory systems, but this imbalance may stem from the difficulty of finding comparable stimuli across modalities. Here, we addressed this problem with a balanced multisensory retro-cue WM design employing combinations of auditory (ripple sounds) and visuospatial (Gabor patches) patterns, adjusted to each participant's discrimination ability. In three separate experiments, participants determined whether the (retro-cued) auditory and/or visual items maintained in WM matched the subsequent probe stimulus. In Experiment 1, all stimuli were audiovisual, and the probes fully mismatched, partially mismatched, or fully matched the memorized item. Experiment 2 was otherwise the same as Experiment 1, but the probes were unimodal. In Experiment 3, participants were cued to maintain only the auditory or the visual aspect of an audiovisual item pair. In two of the three experiments, matching performance was significantly more accurate for the auditory than for the visual attributes of probes. When perceptual and task demands are bimodally equated, auditory attributes can be matched to multisensory items in WM at least as accurately as, if not more precisely than, their visual counterparts.
Affiliation(s)
- Tori Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Işıl Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Parker Kotlarz
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
15
Atkin C, Stacey JE, Roberts KL, Allen HA, Henshaw H, Badham SP. The effect of unisensory and multisensory information on lexical decision and free recall in young and older adults. Sci Rep 2023; 13:16575. [PMID: 37789029] [PMCID: PMC10547689] [DOI: 10.1038/s41598-023-41791-1]
Abstract
Studies using simple low-level stimuli show that multisensory stimuli lead to greater improvements in processing speed for older adults than for young adults. However, there is insufficient evidence to explain how these benefits influence performance in more complex processes such as judgement and memory tasks. This study examined how presenting stimuli in multiple sensory modalities (audio-visual) instead of one (audio-only or visual-only) may help older adults improve their memory and cognitive processing compared with young adults. Young and older adults completed lexical decision (real word vs. pseudoword judgement) and word recall tasks, either independently or in combination (dual-task), with and without perceptual noise. Older adults were better able to remember words when encoding independently, whereas young adults were better able to remember words when encoding in combination with lexical decisions. Both young and older adults recalled words better in the audio-visual condition than in the audio-only condition. The findings indicate significant age differences in handling multiple tasks during encoding. Crucially, there is no greater multisensory benefit for older adults than for young adults in more complex processes; rather, multisensory stimuli can enhance cognitive performance for both young and older adults.
Affiliation(s)
- Harriet A Allen
- School of Psychology, University of Nottingham, Nottingham, UK
- Helen Henshaw
- Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- National Institute for Health and Care Research (NIHR), Nottingham Biomedical Research Centre, Nottingham, UK
16
Cervantes Constantino F, Sánchez-Costa T, Cipriani GA, Carboni A. Visuospatial attention revamps cortical processing of sound amid audiovisual uncertainty. Psychophysiology 2023; 60:e14329. [PMID: 37166096] [DOI: 10.1111/psyp.14329]
Abstract
Selective attentional biases arising in one sensory modality manifest in others. The effects of visuospatial attention, important in visual object perception, are unclear in the auditory domain during audiovisual (AV) scene processing. We investigated the temporal and spatial factors that underlie such transfer neurally. Auditory encoding of random tone pips in AV scenes was addressed via a temporal response function (TRF) model of participants' electroencephalogram (N = 30). The spatially uninformative pips were associated with spatially distributed visual contrast reversals ("flips") through asynchronous, probabilistic AV temporal onset distributions. Participants deployed visuospatial selection on these AV stimuli to perform a task. A late (~300 ms) cross-modal influence over the neural representation of pips was found in the original study and in a replication (N = 21). Transfer depended on the selected visual input being (i) presented during or shortly after a related sound, within relatively narrow temporal distributions (<165 ms), and (ii) positioned at limited (1:4) visual foreground-to-background ratios. Neural encoding of auditory input, as a function of visual input, was largest at visual foreground quadrant sectors and lowest at locations opposite the target. The results indicate that ongoing neural representations of sounds incorporate visuospatial attributes for auditory stream segregation, as cross-modal transfer conveys information that specifies the identity of multisensory signals; a potential mechanism is enhancement or recalibration of the tuning properties of the auditory populations that represent them as objects. The results account for the dynamic evolution of multisensory integration under visual attention, specifying critical latencies at which the relevant cortical networks operate.
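A TRF analysis of this kind amounts to a lagged linear (ridge) regression of the EEG onto stimulus features. The following is a minimal, generic sketch of that idea, not the authors' actual pipeline (real analyses typically use dedicated toolboxes such as mTRF and cross-validate the ridge parameter):

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, ridge=1.0):
    """Estimate a temporal response function (TRF): ridge regression of a
    single EEG channel onto time-lagged copies of a stimulus feature.
    Generic illustration only, with a hypothetical signature."""
    n = len(stimulus)
    # Build the lagged design matrix: one column per lag of the stimulus
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    # Ridge-regularized least squares: (X'X + aI)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)
    return w
```

The returned weight vector is the TRF: the estimated impulse response from the stimulus feature to the EEG channel, one coefficient per time lag.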
Affiliation(s)
- Francisco Cervantes Constantino
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Investigaciones Biológicas "Clemente Estable", Montevideo, Uruguay
- Thaiz Sánchez-Costa
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Germán A Cipriani
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Alejandra Carboni
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
17
Maezawa T, Kawahara JI. Processing symmetry between visual and auditory spatial representations in updating working memory. Q J Exp Psychol (Hove) 2023; 76:672-704. [PMID: 35570663] [DOI: 10.1177/17470218221103253]
Abstract
Updating spatial representations in visual and auditory working memory relies on common processes, and the modalities should compete for attentional resources. If competition occurs, one type of spatial information is presumably weighted over the other, irrespective of sensory modality. This study used incompatible spatial information conveyed from two different cue modalities to examine relative dominance in memory updating. Participants mentally manoeuvred a designated target in a matrix according to visual or auditory stimuli that were presented simultaneously, to identify a terminal location. Prior to the navigation task, the relative perceptual saliences of the visual cues were manipulated to be equal, superior, or inferior to the auditory cues. The results demonstrate that visual and auditory information competed for attentional resources, such that visual/auditory guidance was impaired by incongruent cues delivered from the other modality. Although visual bias was generally observed in working-memory navigation, stimuli of relatively high salience interfered with or facilitated other stimuli regardless of modality, demonstrating the processing symmetry of spatial updating in visual and auditory spatial working memory. Furthermore, this processing symmetry can be identified during the encoding of sensory inputs into working-memory representations. The results imply that auditory spatial updating is comparable to visual spatial updating in that salient stimuli receive a high priority when selecting inputs and are used when tracking spatial representations.
Affiliation(s)
- Tomoki Maezawa
- Department of Psychology, Hokkaido University, Sapporo, Japan
- Jun I Kawahara
- Department of Psychology, Hokkaido University, Sapporo, Japan
18
Alashram AR, Annino G. A Novel Neurorehabilitation Approach for Neural Plasticity Overstimulation and Reorganization in Patients with Neurological Disorders. Phys Med Rehab Kuror 2023. [DOI: 10.1055/a-2004-5836]
Abstract
Neurological disorders are conditions associated with impairments of the nervous system, and these impairments affect patients' activities of daily living. Recently, many advanced modalities have been used in rehabilitation to treat various neurological impairments. However, many of these modalities are available only in clinics, some are expensive, and most patients with neurological disorders have difficulty reaching clinics. This review was designed to establish a new, scientifically grounded neurorehabilitation approach to improve patients' functional recovery following neurological disorders, whether in clinics or at home. The human brain is a network: an intricate, integrated system that coordinates operations among billions of units. Grey matter contains most of the neuronal cell bodies and includes the brain and spinal cord areas involved in muscle control, sensory perception, memory, emotions, decision-making, and self-control. Consequently, patients' functional ability results from complex interactions among various brain and spinal cord areas and neuromuscular systems. Because white matter fibers connect numerous brain areas, stimulating or improving non-motor symptoms, such as motivational, cognitive, and sensory symptoms, in addition to motor symptoms may enhance functional recovery in patients with neurological disorders. The basic principles of the current treatment approach are established on the basis of brain connectivity. Using motor, sensory, motivation, and cognitive (MSMC) interventions during rehabilitation may promote neural plasticity and maximize functional recovery in patients with neurological disorders. Experimental studies are strongly needed to verify our theories and hypotheses.
Affiliation(s)
- Anas R. Alashram
- Department of Physiotherapy, Middle East University, Amman, Jordan
- Applied Science Research Center, Applied Science Private University
- Giuseppe Annino
- Department of Medicine Systems, University of Rome “Tor Vergata”, Rome, Italy
19
Jackson KM, Shaw TH, Helton WS. Evaluating the dual-task decrement within a simulated environment: Word recall and visual search. Appl Ergon 2023; 106:103861. [PMID: 35998391] [DOI: 10.1016/j.apergo.2022.103861]
Abstract
Simulated environments have become better able to replicate the real world and can be used for a variety of purposes, such as testing new technology without the costs or risks of working in the real world. It is therefore now possible to better understand cognitive demands in operational environments, where individuals are often required to multitask. Multitasking often produces performance decrements: adding tasks can decrease performance on each individual task. However, little research has investigated multitasking performance in simulated environments. In the current study we examined how multitasking affects performance in simulated environments. Forty-eight participants performed a dual visual search and word memory task in which they navigated through a simulated environment while being presented with words. Performance was then compared with single-task performance (visual search and word memory alone). Participants experienced significant dual-task interference when comparing the dual task with the single tasks, and subjective measures confirmed these findings. These results could provide useful insight for the design of technology in operational environments and also serve as an evaluation of multiple resource theory (MRT) in simulated environments.
Affiliation(s)
- Kenneth M Jackson
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Tyler H Shaw
- Department of Psychology, George Mason University, Fairfax, VA, USA
- William S Helton
- Department of Psychology, George Mason University, Fairfax, VA, USA
20
Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. [PMID: 36100821] [PMCID: PMC9950240] [DOI: 10.3758/s13421-022-01355-6]
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
21
Fasilis T, Patrikelis P, Messinis L, Kimiskidis V, Korfias S, Nasios G, Alexoudi A, Verentzioti A, Dardiotis E, Gatzonis S. Cognitive Neurorehabilitation in Epilepsy Patients via Virtual Reality Environments: Systematic Review. Adv Exp Med Biol 2023; 1424:135-144. [PMID: 37486487] [DOI: 10.1007/978-3-031-31982-2_14]
Abstract
OBJECTIVE Patients with epilepsy could benefit from the gains observed with virtual reality (VR) and virtual environments (VEs), especially for cognitive difficulties associated with visuospatial navigation (memory, attention, and processing speed). AIM The research questions addressed in this systematic review concern the efficacy of VEs as a cognitive rehabilitation practice in epilepsy and the particular VR methods indicated for patients with epilepsy. To meet the inclusion criteria, studies had to include participants with any form of epilepsy and a methodological design with a structured rehabilitation program/model. Data were collected online using academic databases. RESULTS Fourteen studies were included in the literature review and 6 in the statistical analysis. The ROBINS-I tool was used to assess risk of bias. An inverse-variance (random-effects) analysis of pooled estimates of differences was performed on continuous data. Despite the heterogeneity of the studies, all of them agree on the beneficial effects of VR and VEs on cognitive rehabilitation of visuospatial memory, attention, and information processing speed. CONCLUSION We suggest that patients with epilepsy may benefit from VR cognitive rehabilitation interventions targeting visuospatial memory, attention, and information processing speed. However, further investigation is needed to better understand the mechanisms involved in cognitive rehabilitation via VEs and to establish efficient, dynamic rehabilitation protocols.
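The inverse-variance random-effects pooling mentioned in the results can be sketched generically. This DerSimonian-Laird illustration assumes the standard method and made-up numbers; it is not the authors' actual computation:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes by inverse-variance weighting with a
    DerSimonian-Laird between-study variance component (tau^2).
    Returns the pooled estimate and its standard error."""
    w = [1.0 / v for v in variances]           # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se
```

With homogeneous effects, tau^2 shrinks to zero and the result reduces to a fixed-effect inverse-variance average; heterogeneity inflates tau^2 and widens the pooled standard error.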
Affiliation(s)
- Theodoros Fasilis
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
- Panayiotis Patrikelis
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
- Laboratory of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Lambros Messinis
- Laboratory of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece.
- Vasileios Kimiskidis
- 1st Department of Neurology, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Stefanos Korfias
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
- Grigorios Nasios
- Department of Speech and Language Therapy, School of Health Sciences, University of Ioannina, Ioannina, Greece
- Athanasia Alexoudi
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
- Anastasia Verentzioti
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
- Efthimios Dardiotis
- Department of Neurology, Faculty of Medicine, University of Thessaly, Volos, Greece
- Stylianos Gatzonis
- 1st Department of Neurosurgery, Clinical Neuropsychology Laboratory, School of Medicine, Faculty of Health Sciences, National & Kapodistrian University of Athens, Athens, Greece
22
Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. [PMID: 36936618] [PMCID: PMC10017858] [DOI: 10.3389/fnhum.2023.1058617]
Abstract
Current advancements in technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. Here, in an initial proof-of-concept study, we aimed to use the EyeMusic for the blind areas of sighted individuals, testing their ability to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and correctly placing the stimuli, using sound to represent the areas outside the standard human visual field. They were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory (the remaining 270°) input for successful performance of the task; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance after a brief 1-h online training session and one on-site training session averaging 20 min, and in some cases could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
23
Alomar AZ. A structured multimodal teaching approach enhancing musculoskeletal physical examination skills among undergraduate medical students. Med Educ Online 2022; 27:2114134. [PMID: 35993497] [PMCID: PMC9466621] [DOI: 10.1080/10872981.2022.2114134]
Abstract
Current evidence indicates that undergraduate medical students display deficits in musculoskeletal physical examination skills (MPES). While various instructional methods are recommended for teaching clinical skills, effective methods for teaching MPES have not been established. This study compared the effectiveness of a multimodal teaching approach incorporating video-based learning, interactive small-group teaching, hands-on practicing, peer-assisted learning, formative assessment, and constructive feedback with traditional bedside teaching in developing undergraduate orthopedic MPES. Participants were 151 fifth-year medical students divided into two groups. One group received multimodal teaching, and the other received traditional bedside teaching. In both groups, the participants learned how to physically examine the knee and shoulder. The primary outcome was objective structured clinical examination (OSCE) scores, while the secondary outcomes included teaching sessions' total durations, facilitator's demonstration time, participants' practice time, and proportion of students with passing checklist scores and global ratings-based assessments for the two teaching approaches. The multimodal teaching group had significantly higher OSCE scores (checklist scores, global ratings, and passing rates; p = 0.02, 0.02, 0.01, respectively) than the comparison group. Individual OSCE component assessments showed significant improvements in the special musculoskeletal physical examination test. The overall duration and amount of participants' hands-on time were significantly longer for the multimodal than for the traditional bedside teaching group (p = 0.01 and 0.01, respectively), and the facilitator's demonstration time was significantly shorter (p = 0.01). The multimodal learner-centered teaching approach evaluated in this study was effective for teaching MPES. It appeared to maximize learner engagement through enhancing interactions and providing increased time to engage in hands-on practice. This teaching approach improved MPES levels, maximized teaching efficiency for scenarios with limited instruction time and resources, and enhanced competency of undergraduate medical students in performing special musculoskeletal physical examinations compared to traditional bedside teaching.
Collapse
Affiliation(s)
- Abdulaziz Z. Alomar
- Division of Arthroscopy & Sports Medicine, Department of Orthopaedic Surgery, King Saud University, Riyadh, Saudi Arabia
| |
Collapse
|
24
|
Pecher D, Zeelenberg R. Does multisensory study benefit memory for pictures and sounds? Cognition 2022; 226:105181. [DOI: 10.1016/j.cognition.2022.105181] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 05/16/2022] [Accepted: 05/23/2022] [Indexed: 11/03/2022]
|
25
|
De Winne J, Devos P, Leman M, Botteldooren D. With No Attention Specifically Directed to It, Rhythmic Sound Does Not Automatically Facilitate Visual Task Performance. Front Psychol 2022; 13:894366. [PMID: 35756201 PMCID: PMC9226390 DOI: 10.3389/fpsyg.2022.894366] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/19/2022] [Indexed: 11/22/2022] Open
Abstract
In a century where humans and machines—powered by artificial intelligence or not—increasingly work together, it is of interest to understand human processing of multi-sensory stimuli in relation to attention and working memory. This paper explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multi-sensory information processing. In turn, this can make the interaction between humans, or between machines and humans, more engaging, rewarding, and activating. For this purpose, a novel working memory paradigm was developed in which participants are presented with a series of five target digits randomly interchanged with five distractor digits. Their goal is to remember the target digits and recall them orally. Depending on the condition, support is provided by audio and/or rhythm. It was expected that sound would lead to better performance, that this effect would differ between rhythmic and non-rhythmic sound, and that performance would vary across participants. The experimental data were analyzed with classical statistics, and predictive models were also developed to predict outcomes from a range of input variables related to the experiment and the participant. The effect of auditory support was confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, music education did not significantly affect performance and even tended toward a negative effect. To better understand the underlying attentional processes, brain activation data, e.g., from electroencephalography (EEG), should also be recorded; this is left for future work.
Collapse
Affiliation(s)
- Jorg De Winne
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
| | - Paul Devos
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
| | - Marc Leman
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
| | - Dick Botteldooren
- Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
| |
Collapse
|
26
|
Liu Q, Ulloa A, Horwitz B. The Spatiotemporal Neural Dynamics of Intersensory Attention Capture of Salient Stimuli: A Large-Scale Auditory-Visual Modeling Study. Front Comput Neurosci 2022; 16:876652. [PMID: 35645750 PMCID: PMC9133449 DOI: 10.3389/fncom.2022.876652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 04/04/2022] [Indexed: 11/13/2022] Open
Abstract
The spatiotemporal dynamics of the neural mechanisms underlying endogenous (top-down) and exogenous (bottom-up) attention, and how attention is controlled or allocated in intersensory perception are not fully understood. We investigated these issues using a biologically realistic large-scale neural network model of visual-auditory object processing of short-term memory. We modeled and incorporated into our visual-auditory object-processing model the temporally changing neuronal mechanisms for the control of endogenous and exogenous attention. The model successfully performed various bimodal working memory tasks, and produced simulated behavioral and neural results that are consistent with experimental findings. Simulated fMRI data were generated that constitute predictions that human experiments could test. Furthermore, in our visual-auditory bimodality simulations, we found that increased working memory load in one modality would reduce the distraction from the other modality, and a possible network mediating this effect is proposed based on our model.
Collapse
Affiliation(s)
- Qin Liu
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Department of Physics, University of Maryland, College Park, College Park, MD, United States
| | - Antonio Ulloa
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Center for Information Technology, National Institutes of Health, Bethesda, MD, United States
| | - Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Correspondence: Barry Horwitz
| |
Collapse
|
27
|
Cui J, Sawamura D, Sakuraba S, Saito R, Tanabe Y, Miura H, Sugi M, Yoshida K, Watanabe A, Tokikuni Y, Yoshida S, Sakai S. Effect of Audiovisual Cross-Modal Conflict during Working Memory Tasks: A Near-Infrared Spectroscopy Study. Brain Sci 2022; 12:brainsci12030349. [PMID: 35326305 PMCID: PMC8946709 DOI: 10.3390/brainsci12030349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 03/01/2022] [Accepted: 03/01/2022] [Indexed: 12/04/2022] Open
Abstract
Cognitive conflict effects are well characterized within unimodality. However, little is known about cross-modal conflicts and their neural bases. This study characterizes the two types of visual and auditory cross-modal conflicts through working memory tasks and brain activities. The participants consisted of 31 healthy, right-handed, young male adults. The Paced Auditory Serial Addition Test (PASAT) and the Paced Visual Serial Addition Test (PVSAT) were performed under distractor and no distractor conditions. Distractor conditions comprised two conditions in which either the PASAT or PVSAT was the target task, and the other was used as a distractor stimulus. Additionally, oxygenated hemoglobin (Oxy-Hb) concentration changes in the frontoparietal regions were measured during tasks. The results showed significantly lower PASAT performance under distractor conditions than under no distractor conditions, but not in the PVSAT. Oxy-Hb changes in the bilateral ventrolateral prefrontal cortex (VLPFC) and inferior parietal cortex (IPC) significantly increased in the PASAT with distractor compared with no distractor conditions, but not in the PVSAT. Furthermore, there were significant positive correlations between Δtask performance accuracy and ΔOxy-Hb in the bilateral IPC only in the PASAT. Visual cross-modal conflict significantly impairs auditory task performance, and bilateral VLPFC and IPC are key regions in inhibiting visual cross-modal distractors.
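The serial-addition rule behind the PASAT and PVSAT (respond with the sum of the current and the immediately preceding item) can be sketched as follows; this is a minimal illustration of the task logic, with function names and digit streams that are hypothetical, not taken from the study:

```python
def pasat_responses(digits):
    """Paced Serial Addition Test rule: for each digit after the first,
    the correct response is its sum with the immediately preceding digit."""
    return [a + b for a, b in zip(digits, digits[1:])]

def pasat_accuracy(digits, responses):
    """Proportion of a participant's responses that match the rule."""
    correct = pasat_responses(digits)
    hits = sum(r == c for r, c in zip(responses, correct))
    return hits / len(correct) if correct else 0.0

# Example: for the stream 3, 5, 2 the correct responses are 8 (3+5) and 7 (5+2).
stream = [3, 5, 2]
answers = pasat_responses(stream)
```

The same rule applies to the visual variant (PVSAT); only the stimulus modality differs.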
Collapse
Affiliation(s)
- Jiahong Cui
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (J.C.); (R.S.); (H.M.); (A.W.); (Y.T.)
| | - Daisuke Sawamura
- Department of Rehabilitation Science, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (K.Y.); (S.S.)
- Correspondence: Daisuke Sawamura
| | - Satoshi Sakuraba
- Department of Rehabilitation Sciences, Health Sciences University of Hokkaido, Sapporo 061-0293, Japan; (S.S.); (S.Y.)
| | - Ryuji Saito
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (J.C.); (R.S.); (H.M.); (A.W.); (Y.T.)
| | - Yoshinobu Tanabe
- Department of Rehabilitation, Shinsapporo Paulo Hospital, Sapporo 004-0002, Japan;
| | - Hiroshi Miura
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (J.C.); (R.S.); (H.M.); (A.W.); (Y.T.)
| | - Masaaki Sugi
- Department of Rehabilitation, Tokeidai Memorial Hospital, Sapporo 060-0031, Japan;
| | - Kazuki Yoshida
- Department of Rehabilitation Science, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (K.Y.); (S.S.)
| | - Akihiro Watanabe
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (J.C.); (R.S.); (H.M.); (A.W.); (Y.T.)
| | - Yukina Tokikuni
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (J.C.); (R.S.); (H.M.); (A.W.); (Y.T.)
| | - Susumu Yoshida
- Department of Rehabilitation Sciences, Health Sciences University of Hokkaido, Sapporo 061-0293, Japan; (S.S.); (S.Y.)
| | - Shinya Sakai
- Department of Rehabilitation Science, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan; (K.Y.); (S.S.)
| |
Collapse
|
28
|
Thériault R, Landry M, Raz A. EXPRESS: The Rubber Hand Illusion: Top-Down Attention Modulates Embodiment. Q J Exp Psychol (Hove) 2022; 75:2129-2148. [PMID: 35073801 PMCID: PMC9516612 DOI: 10.1177/17470218221078858] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The Rubber Hand Illusion (RHI) creates distortions of body ownership through multimodal integration of somatosensory and visual inputs. This illusion largely rests on bottom-up (automatic multisensory and perceptual integration) mechanisms. However, the relative contribution from top-down factors, such as controlled processes involving attentional regulation, remains unclear. Following previous work that highlights the putative influence of higher-order cognition in the RHI, we aimed to further examine how modulations of working memory load and task instructions—two conditions engaging top-down cognitive processes—influence the experience of the RHI, as indexed by a number of psychometric dimensions. Relying on exploratory factor analysis for assessing this phenomenology within the RHI, our results confirm the influence of higher-order, top-down mental processes. Whereas task instruction strongly modulated embodiment of the rubber hand, cognitive load altered the affective dimension of the RHI. Our findings corroborate that top-down processes shape the phenomenology of the RHI and herald new ways to improve experimental control over the RHI.
Collapse
Affiliation(s)
- Rémi Thériault
- Department of Psychiatry, McGill University
- Department of Psychology, Université du Québec à Montréal
- Institute for Interdisciplinary Brain and Behavioral Sciences, Chapman University
| | - Mathieu Landry
- Institute for Interdisciplinary Brain and Behavioral Sciences, Chapman University
- Integrated Program in Neuroscience, Montreal Neurological Institute
| | - Amir Raz
- Department of Psychiatry, McGill University
- Institute for Interdisciplinary Brain and Behavioral Sciences, Chapman University
- The Lady Davis Institute at the SMBD Jewish General Hospital
| |
Collapse
|
29
|
Sicard V, Stephenson DD, Dodd AB, Pabbathi Reddy S, Robertson-Benta CR, Ryman SG, Hanlon FM, Shaff NA, Ling JM, Hergert DC, Vakamudi K, Hogeveen J, Mayer AR. Is the prefrontal cortex organized by supramodal or modality-specific sensory demands during adolescence? Dev Cogn Neurosci 2021; 51:101006. [PMID: 34419765 PMCID: PMC8379626 DOI: 10.1016/j.dcn.2021.101006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 07/19/2021] [Accepted: 08/12/2021] [Indexed: 11/22/2022] Open
Abstract
Attention is inherently biased towards the visual modality during most multisensory scenarios in adults, but the developmental trajectory towards visual dominance has not been fully elucidated. More recent evidence in primates and adult humans suggests a modality-specific stratification of the prefrontal cortex. The current study therefore used functional magnetic resonance imaging (fMRI) to investigate the neuronal correlates of proactive (following cues) and reactive (following probes) cognitive control for simultaneous audio-visual stimulation in 67 healthy adolescents (13-18 years old). Behavioral results were only partially supportive of visual dominance in adolescents, with both reduced response times and accuracy during attend-visual relative to attend-auditory trials. Differential activation of medial and lateral prefrontal cortex for processing incongruent relative to congruent stimuli (reactive control) was also only observed during attend-visual trials. There was no evidence of modality-specific prefrontal cortex stratification during the active processing of multisensory stimuli or during separate functional connectivity analyses. Attention-related modulations were also greater within visual relative to auditory cortex, but were less robust than observed in previous adult studies. Collectively, current results suggest a continued transition towards visual dominance in adolescence, as well as limited modality-specific specialization of prefrontal cortex and attentional modulations of unisensory cortex.
Collapse
Affiliation(s)
- V Sicard
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - D D Stephenson
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - A B Dodd
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - S Pabbathi Reddy
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - C R Robertson-Benta
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - S G Ryman
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - F M Hanlon
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - N A Shaff
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - J M Ling
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - D C Hergert
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - K Vakamudi
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA
| | - J Hogeveen
- Department of Psychology, University of New Mexico, Albuquerque, NM, USA
| | - A R Mayer
- The Mind Research Network/Lovelace Biomedical Research Institute, Albuquerque, NM, USA; Department of Neurology, University of New Mexico, Albuquerque, NM, USA; Department of Emergency Medicine, University of New Mexico Health Sciences Center, Albuquerque, NM, USA; Department of Psychology, University of New Mexico, Albuquerque, NM, USA.
| |
Collapse
|
30
|
Pahor A, Collins C, Smith RN, Moon A, Stavropoulos T, Silva I, Peng E, Jaeggi SM, Seitz AR. Multisensory Facilitation of Working Memory Training. JOURNAL OF COGNITIVE ENHANCEMENT 2021; 5:386-395. [PMID: 34485810 PMCID: PMC8415034 DOI: 10.1007/s41465-020-00196-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Accepted: 10/16/2020] [Indexed: 11/29/2022]
Abstract
Research suggests that memorization of multisensory stimuli benefits performance compared to memorization of unisensory stimuli; however, little is known about multisensory facilitation in the context of working memory (WM) training and transfer. To investigate this, 240 adults were randomly assigned to an N-back training task that consisted of visual-only stimuli, alternating visual and auditory blocks, or audio-visual (multisensory) stimuli, or to a passive control group. Participants in the active groups completed 13 sessions of N-back training (6.7 hours in total) and all groups completed a battery of WM tasks: untrained N-back tasks, Corsi Blocks, Sequencing, and Symmetry Span. The Multisensory group showed similar training N-level gain compared to the Visual Only group, and both of these groups outperformed the Alternating group on the training task. As expected, all three active groups significantly improved on untrained visual N-back tasks compared to the Control group. In contrast, the Multisensory group showed significantly greater gains on the Symmetry Span task and to a certain extent on the Sequencing task compared to other groups. These results tentatively suggest that incorporating multisensory objects in a WM training protocol can benefit performance on the training task and potentially facilitate transfer to complex WM span tasks.
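The N-back logic underlying the training task (respond "match" when the current stimulus equals the one presented n trials earlier) can be sketched as below; the function names and the scoring choice (hit and false-alarm rates) are illustrative assumptions, not the study's exact implementation:

```python
def nback_targets(stimuli, n):
    """Indices at which the current stimulus matches the one n steps back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_score(stimuli, responses, n):
    """Hit rate and false-alarm rate, given the set of trial indices
    at which the participant responded 'match'."""
    targets = set(nback_targets(stimuli, n))
    nontargets = set(range(n, len(stimuli))) - targets
    responses = set(responses)
    hit_rate = len(targets & responses) / len(targets) if targets else 0.0
    fa_rate = len(responses & nontargets) / len(nontargets) if nontargets else 0.0
    return hit_rate, fa_rate

# 2-back example: in A B A B C A C, positions 2, 3, and 6 are targets.
hits, fas = nback_score(list("ABABCAC"), {2, 6}, 2)
```

Adaptive trainers of this kind typically raise or lower n between blocks based on such scores.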
Collapse
Affiliation(s)
- Anja Pahor
- University of California, Riverside, Department of Psychology, Riverside, California, USA
- University of California, Irvine, School of Education, Irvine, California, USA
| | - Cindy Collins
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| | - Rachel N Smith
- University of California, Irvine, School of Education, Irvine, California, USA
| | - Austin Moon
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| | - Trevor Stavropoulos
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| | - Ilse Silva
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| | - Elaine Peng
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| | - Susanne M Jaeggi
- University of California, Irvine, School of Education, School of Social Sciences (Department of Cognitive Sciences), Irvine, California, USA
| | - Aaron R Seitz
- University of California, Riverside, Department of Psychology, Riverside, California, USA
| |
Collapse
|
31
|
The role of vision and proprioception in self-motion encoding: An immersive virtual reality study. Atten Percept Psychophys 2021; 83:2865-2878. [PMID: 34341941 PMCID: PMC8460581 DOI: 10.3758/s13414-021-02344-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/10/2021] [Indexed: 11/08/2022]
Abstract
Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for own body location in space. In a previous study, we investigated participants' accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR, it caused more errors in both the blind condition and to a lesser degree when proprioception was disrupted. These results indicate an improvement in encoding own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found.
Collapse
|
32
|
Watsjold B, Ilgen J, Monteiro S, Sibbald M, Goldberger ZD, Thompson WR, Norman G. Do you hear what you see? Utilizing phonocardiography to enhance proficiency in cardiac auscultation. PERSPECTIVES ON MEDICAL EDUCATION 2021; 10:148-154. [PMID: 33438146 PMCID: PMC8187497 DOI: 10.1007/s40037-020-00646-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 12/08/2020] [Accepted: 12/16/2020] [Indexed: 06/12/2023]
Abstract
INTRODUCTION Cardiac auscultation skills have proven difficult to train and maintain. The authors investigated whether using phonocardiograms as visual adjuncts to audio cases improved first-year medical students' cardiac auscultation performance. METHODS The authors randomized 135 first-year medical students using an email referral link in 2018 and 2019 to train using audio-only cases (audio group) or audio with phonocardiogram tracings (combined group). Training included 7 cases with normal and abnormal auscultation findings. The assessment included feature identification and diagnostic accuracy using 14 audio-only cases: 7 presented during training and 7 alternate versions of the same diagnoses. The assessment—administered immediately after training and repeated 7 days later—prompted participants to identify the key features and diagnoses for 14 audio-only cases. Key feature scores and diagnostic accuracy were compared between groups using repeated measures ANOVA. RESULTS Mean key feature scores were statistically significantly higher in the combined group (70%, 95% CI 67-75%) than in the audio group (61%, 95% CI 56-66%) (F(1,116) = 6.144, p = 0.015, ds = 0.45). Similarly, mean diagnostic accuracy in the combined group (68%, 95% CI 62-73%) was significantly higher than in the audio group (59%, 95% CI 54-65%), although with a small effect size (F(1,116) = 4.548, p = 0.035, ds = 0.40). Time on task for the assessment and prior auscultation experience did not significantly impact performance on either measure. DISCUSSION The addition of phonocardiograms to supplement cardiac auscultation training improves diagnostic accuracy and heart sound feature identification amongst novice students compared to training with audio alone.
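The effect sizes reported above (ds = 0.45 and 0.40) are between-group Cohen's ds values: the difference in group means divided by the pooled standard deviation. A minimal sketch of that computation, with illustrative data rather than the study's scores:

```python
import math

def cohens_ds(group1, group2):
    """Between-group Cohen's d_s: mean difference divided by the pooled
    standard deviation (n-1 denominator in each group's variance)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By a common rule of thumb, d near 0.2 is small and near 0.5 medium, which matches the abstract's description of these effects as modest.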
Collapse
Affiliation(s)
- Bjorn Watsjold
- Department of Emergency Medicine, University of Washington School of Medicine, Seattle, WA, USA.
| | - Jonathan Ilgen
- Department of Emergency Medicine, University of Washington School of Medicine, Seattle, WA, USA
- Center for Leadership & Innovation in Medical Education, University of Washington School of Medicine, Seattle, WA, USA
| | - Sandra Monteiro
- Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
| | - Matthew Sibbald
- Division of Cardiology, Department of Medicine, McMaster University, Hamilton, ON, Canada
| | - Zachary D Goldberger
- Division of Cardiovascular Medicine, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
| | - W Reid Thompson
- Division of Cardiology, Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Geoff Norman
- Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
| |
Collapse
|
33
|
Xie Y, Li Y, Duan H, Xu X, Zhang W, Fang P. Theta Oscillations and Source Connectivity During Complex Audiovisual Object Encoding in Working Memory. Front Hum Neurosci 2021; 15:614950. [PMID: 33762914 PMCID: PMC7982740 DOI: 10.3389/fnhum.2021.614950] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 01/28/2021] [Indexed: 12/02/2022] Open
Abstract
Working memory is a limited capacity memory system that involves the short-term storage and processing of information. Neuroscientific studies of working memory have mostly focused on the essential roles of neural oscillations during item encoding from single sensory modalities (e.g., visual and auditory). However, the characteristics of neural oscillations during multisensory encoding in working memory are rarely studied. Our study investigated the oscillation characteristics of neural signals in scalp electrodes and mapped functional brain connectivity while participants encoded complex audiovisual objects in a working memory task. Experimental results showed that theta oscillations (4–8 Hz) were prominent and topographically distributed across multiple cortical regions, including prefrontal (e.g., superior frontal gyrus), parietal (e.g., precuneus), temporal (e.g., inferior temporal gyrus), and occipital (e.g., cuneus) cortices. Furthermore, neural connectivity at the theta oscillation frequency was significant in these cortical regions during audiovisual object encoding compared with single modality object encoding. These results suggest that local oscillations and interregional connectivity via theta activity play an important role during audiovisual object encoding and may contribute to the formation of working memory traces from multisensory items.
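As a rough illustration of what a theta-band (4–8 Hz) analysis isolates, the sketch below computes spectral power inside a frequency band with a naive DFT over a synthetic 6 Hz signal; the sampling rate, signal, and band edges are illustrative assumptions, not the study's EEG pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed DFT power over frequency bins within [f_lo, f_hi].
    Naive O(n^2) DFT; fine for short illustrative signals."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n  # frequency of bin k in Hz
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 100.0  # assumed sampling rate in Hz
sig = [math.sin(2 * math.pi * 6.0 * t / fs) for t in range(200)]  # 2 s of a 6 Hz sine
theta = band_power(sig, fs, 4.0, 8.0)   # power inside the theta band
alpha = band_power(sig, fs, 8.5, 12.0)  # power outside it, for comparison
```

A 6 Hz oscillation concentrates its power in the 4–8 Hz bins, which is the quantity band-limited analyses like the one in this study track over electrodes and time.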
Collapse
Affiliation(s)
- Yuanjun Xie
- School of Education, Xin Yang College, Xinyang, China
- Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| | - Yanyan Li
- School of Education, Xin Yang College, Xinyang, China
| | - Haidan Duan
- School of Education, Xin Yang College, Xinyang, China
| | - Xiliang Xu
- School of Education, Xin Yang College, Xinyang, China
| | - Wenmo Zhang
- Department of Fundamental, Army Logistical University, Chongqing, China
- Department of Social Medicine and Health and Management, College of Military Preventive Medicine, Army Medical University, Chongqing, China
| | - Peng Fang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
| |
Collapse
|
34
|
Commonalities of visual and auditory working memory in a spatial-updating task. Mem Cognit 2021; 49:1172-1187. [PMID: 33616864 DOI: 10.3758/s13421-021-01151-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/02/2021] [Indexed: 11/08/2022]
Abstract
Although visual and auditory inputs are initially processed in separate perception systems, studies have built on the idea that to maintain spatial information these modalities share a component of working memory. The present study used working memory navigation tasks to examine functional similarities and dissimilarities in the performance of updating tasks. Participants mentally updated the spatial location of a target in a virtual array in response to sequential pictorial and sonant directional cues before identifying the target's final location. We predicted that if working memory representations are modality-specific, mixed-modality cues would demonstrate a cost of modality switching relative to unimodal cues. The results indicate that updating performance using visual unimodal cues positively correlated with that using auditory unimodal cues. Task performance using unimodal cues was comparable to that using mixed modality cues. The results of a subsequent experiment involving updating of target traces were consistent with those of the preceding experiments and support the view of modality-nonspecific memory.
Collapse
|
35
|
Boenniger MM, Diers K, Herholz SC, Shahid M, Stöcker T, Breteler MMB, Huijbers W. A Functional MRI Paradigm for Efficient Mapping of Memory Encoding Across Sensory Conditions. Front Hum Neurosci 2021; 14:591721. [PMID: 33551773 PMCID: PMC7859438 DOI: 10.3389/fnhum.2020.591721] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 12/02/2020] [Indexed: 11/13/2022] Open
Abstract
We introduce a new and time-efficient memory-encoding paradigm for functional magnetic resonance imaging (fMRI). This paradigm is optimized for mapping multiple contrasts using a mixed design, using auditory (environmental/vocal) and visual (scene/face) stimuli. We demonstrate that the paradigm evokes robust neuronal activity in typical sensory and memory networks. We were able to detect auditory and visual sensory-specific encoding activities in auditory and visual cortices. Also, we detected stimulus-selective activation in environmental-, voice-, scene-, and face-selective brain regions (parahippocampal place and fusiform face area). A subsequent recognition task allowed the detection of sensory-specific encoding success activity (ESA) in both auditory and visual cortices, as well as sensory-unspecific positive ESA in the hippocampus. Further, sensory-unspecific negative ESA was observed in the precuneus. Among others, the parallel mixed design enabled sustained and transient activity comparison in contrast to rest blocks. Sustained and transient activations showed great overlap in most sensory brain regions, whereas several regions, typically associated with the default-mode network, showed transient rather than sustained deactivation. We also show that the use of a parallel mixed model had relatively little influence on positive or negative ESA. Together, these results demonstrate a feasible, versatile, and brief memory-encoding task, which includes multiple sensory stimuli to guarantee a comprehensive measurement. This task is especially suitable for large-scale clinical or population studies, which aim to test task-evoked sensory-specific and sensory-unspecific memory-encoding performance as well as broad sensory activity across the life span within a very limited time frame.
Collapse
Affiliation(s)
- Meta M. Boenniger
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Kersten Diers
- Image Analysis Group, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Sibylle C. Herholz
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Mohammad Shahid
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Tony Stöcker
- MR Physics, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Monique M. B. Breteler
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Institute for Medical Biometry, Informatics and Epidemiology (IMBIE), Faculty of Medicine, University of Bonn, Bonn, Germany
| | - Willem Huijbers
- Population Health Sciences, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| |
Collapse
|
36
|
Gallero-Salas Y, Han S, Sych Y, Voigt FF, Laurenczy B, Gilad A, Helmchen F. Sensory and Behavioral Components of Neocortical Signal Flow in Discrimination Tasks with Short-Term Memory. Neuron 2020; 109:135-148.e6. [PMID: 33159842 DOI: 10.1016/j.neuron.2020.10.017] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 09/13/2020] [Accepted: 10/12/2020] [Indexed: 12/30/2022]
Abstract
In the neocortex, each sensory modality engages distinct sensory areas that route information to association areas. Where signal flow converges for maintaining information in short-term memory, and how behavior may influence signal routing, remain open questions. Using wide-field calcium imaging, we compared cortex-wide neuronal activity in layer 2/3 for mice trained in auditory and tactile tasks with delayed response. In both tasks, mice were either active or passive during stimulus presentation, moving their body or sitting quietly. Irrespective of behavioral strategy, auditory and tactile stimulation activated distinct subdivisions of the posterior parietal cortex (PPC), anterior area A and rostrolateral area RL, which held stimulus-related information necessary for the respective tasks. In the delay period, in contrast, behavioral strategy rather than sensory modality determined short-term memory location, with activity converging frontomedially in active trials and posterolaterally in passive trials. Our results suggest behavior-dependent routing of sensory-driven cortical signal flow from modality-specific PPC subdivisions to higher association areas.
Collapse
Affiliation(s)
- Yasir Gallero-Salas
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Shuting Han
- Brain Research Institute, University of Zurich, Zurich, Switzerland
| | - Yaroslav Sych
- Brain Research Institute, University of Zurich, Zurich, Switzerland
| | - Fabian F Voigt
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Balazs Laurenczy
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland
| | - Ariel Gilad
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Department of Medical Neurobiology, Institute for Medical Research Israel Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel.
| | - Fritjof Helmchen
- Brain Research Institute, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, Zurich, Switzerland.
| |
Collapse
|
37
|
Puszta A, Pertich Á, Giricz Z, Nyujtó D, Bodosi B, Eördegh G, Nagy A. Predicting Stimulus Modality and Working Memory Load During Visual- and Audiovisual-Acquired Equivalence Learning. Front Hum Neurosci 2020; 14:569142. [PMID: 33132883 PMCID: PMC7578848 DOI: 10.3389/fnhum.2020.569142] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 09/01/2020] [Indexed: 11/13/2022] Open
Abstract
Scholars have extensively studied the electroencephalography (EEG) correlates of associative working memory (WM) load. However, the effect of stimulus modality on EEG patterns within this process is less understood. To fill this research gap, the present study re-analyzed EEG datasets recorded during visual and audiovisual equivalence learning tasks from earlier studies. The number of associations to be maintained in WM (the WM load) was increased using the staircase method during the acquisition phase of the tasks. A support vector machine algorithm was employed to predict WM load and stimulus modality from the power, phase connectivity, and cross-frequency coupling (CFC) values obtained during time segments with different WM loads in the visual and audiovisual tasks. High accuracy (>90%) was observed in predicting stimulus modality from power spectral density and from theta-beta CFC. Using theta and alpha phase connectivity, however, accuracy was higher in predicting WM load (≥75%) than in predicting stimulus modality (which was at chance level). Under low WM load conditions, this connectivity was highest between the frontal and parieto-occipital channels. The results corroborate our earlier findings that dissociated stimulus modality based on power spectra and CFC during equivalence learning, and emphasize the importance of alpha and theta frontoparietal connectivity in tracking WM load.
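The decoding approach described in this abstract can be illustrated with a brief sketch: a linear support vector machine classifying WM load from EEG-derived features under cross-validation. Everything below is a hypothetical toy (synthetic features, arbitrary dimensions and effect size), not the authors' actual pipeline.

```python
# Toy sketch of SVM decoding of working-memory load from EEG features.
# The feature matrix is synthetic; dimensions and the injected class effect
# are illustrative assumptions, not values taken from the study.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64            # e.g., channel x frequency-band powers
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # 0 = low WM load, 1 = high WM load
X[y == 1, :8] += 0.8                      # weak load-dependent shift in 8 features

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

With real data, `X` would hold per-segment power, phase-connectivity, or CFC values, and chance level would typically be verified with permutation tests rather than assumed to be 50%.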
Collapse
Affiliation(s)
- András Puszta
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway; Department of Psychology, Faculty of Social Sciences, University of Oslo, Oslo, Norway; Department of Physiology, University of Szeged, Szeged, Hungary
| | - Ákos Pertich
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
| | - Zsófia Giricz
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
| | - Diána Nyujtó
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
| | - Balázs Bodosi
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
| | - Gabriella Eördegh
- Faculty of Health Sciences and Social Studies, University of Szeged, Szeged, Hungary
| | - Attila Nagy
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
| |
Collapse
|
38
|
Xu W, Kolozsvari OB, Oostenveld R, Hämäläinen JA. Rapid changes in brain activity during learning of grapheme-phoneme associations in adults. Neuroimage 2020; 220:117058. [PMID: 32561476 DOI: 10.1016/j.neuroimage.2020.117058] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 06/11/2020] [Accepted: 06/12/2020] [Indexed: 02/06/2023] Open
Abstract
Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization that supports letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~50 min; second day ~25 min), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training: the grapheme-phoneme association in one set (Learnable) could be learned based on the learning cues provided, but not in the other set (Control). Learning progress was tracked on a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. Learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing as grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. Some of these learning effects were observed only after memory consolidation on the following day. Overall, the learning process modulated activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior-temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship had been extracted from the learning cues.
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.
Collapse
Affiliation(s)
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| | - Orsolya Beatrix Kolozsvari
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| | - Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Jarmo Arvid Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
| |
Collapse
|
39
|
Masini A, Marini S, Leoni E, Lorusso G, Toselli S, Tessari A, Ceciliani A, Dallolio L. Active Breaks: A Pilot and Feasibility Study to Evaluate the Effectiveness of Physical Activity Levels in a School Based Intervention in an Italian Primary School. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17124351. [PMID: 32560544 PMCID: PMC7345227 DOI: 10.3390/ijerph17124351] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 06/12/2020] [Accepted: 06/16/2020] [Indexed: 02/01/2023]
Abstract
Background: Schools provide access to children regardless of age, ethnicity, gender, and socio-economic class, and can be identified as a key environment in which to promote children's physical activity (PA). The guidelines of the European Union recommend accumulating PA in bouts of at least 10 min to reach the daily 60 min. Active breaks (ABs) led by teachers inside the classroom represent a good strategy to promote PA. The aim of this pilot and feasibility study was to evaluate the feasibility and effectiveness, in terms of PA level, of an AB programme in children aged 8–9 years attending primary school. Methods: A pre-post quasi-experimental pilot and feasibility study was performed in two primary school classes, one assigned to a 14-week AB intervention (AB group) and the other to the control group (CG). At baseline and at follow-up, children's sedentary and motor activity was monitored over an entire week using an ActiGraph accelerometer (ActiLife6 wGT3X-BT). The satisfaction of children and teachers was assessed by self-administered questionnaires. Results: In the pre-post comparison, the AB group (n = 16) showed a reduction in weekly sedentary activity (−168.7 min, p > 0.05), an increase in step counts (+14,026.9, p < 0.05), and an increase in time spent in moderate to vigorous PA (MVPA): weekly MVPA +64.4 min, daily MVPA +8.05 min, percentage of MVPA +0.70%. By contrast, the CG worsened on all variables. After adjusting for baseline values, ANCOVA showed significant differences between the AB group and the CG for time spent in MVPA, percentage of MVPA, and step counts. The satisfaction of children and teachers was good. Teachers were able to adapt the AB protocol to the needs of the school curriculum, thus confirming the feasibility of the AB programme.
Conclusions: This pilot and feasibility study showed the feasibility and effectiveness of the AB protocol and represented the basis for a future controlled trial.
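The baseline-adjusted group comparison (ANCOVA) described above can be sketched as a linear model of the follow-up outcome on group membership plus baseline. All numbers below are synthetic and purely illustrative (including the hypothetical 60 min/week effect); only the adjustment logic mirrors the analysis reported.

```python
# Minimal ANCOVA-style sketch: regress follow-up MVPA on group and baseline,
# so the group coefficient estimates a baseline-adjusted group difference.
# All values are synthetic; the 60 min/week "effect" is an arbitrary assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 32
group = np.repeat([0, 1], n // 2)            # 0 = control, 1 = active-breaks
baseline = rng.normal(300.0, 40.0, size=n)   # baseline weekly MVPA (minutes)
follow_up = baseline + 60.0 * group + rng.normal(0.0, 10.0, size=n)

X = np.column_stack([np.ones(n), group, baseline])    # design: intercept, group, baseline
beta, *_ = np.linalg.lstsq(X, follow_up, rcond=None)  # ordinary least squares
print(f"adjusted group difference: {beta[1]:.1f} min/week")
```

In practice one would also report a standard error and p value for the group coefficient (e.g., via a statistics package) rather than the point estimate alone.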
Collapse
Affiliation(s)
- Alice Masini
- Department of Biomedical and Neuromotor Science, University of Bologna, Bologna Via San Giacomo, 12, 40126 Bologna, Italy; (A.M.); (E.L.); (G.L.); (L.D.)
| | - Sofia Marini
- Department of Life Quality Studies, University of Bologna, Campus of Rimini, Rimini Corso d’Augusto 237, 47921 Rimini, Italy;
- Correspondence: ; Tel.: +39-05-1209-4812
| | - Erica Leoni
- Department of Biomedical and Neuromotor Science, University of Bologna, Bologna Via San Giacomo, 12, 40126 Bologna, Italy; (A.M.); (E.L.); (G.L.); (L.D.)
| | - Giovanni Lorusso
- Department of Biomedical and Neuromotor Science, University of Bologna, Bologna Via San Giacomo, 12, 40126 Bologna, Italy; (A.M.); (E.L.); (G.L.); (L.D.)
| | - Stefania Toselli
- Department of Biomedical and Neuromotor Science, University of Bologna, Bologna Via Selmi, 3, 40126 Bologna, Italy;
| | - Alessia Tessari
- Department of Psychology, University of Bologna, Bologna Viale Berti Pichat, 5, 40126 Bologna, Italy;
| | - Andrea Ceciliani
- Department of Life Quality Studies, University of Bologna, Campus of Rimini, Rimini Corso d’Augusto 237, 47921 Rimini, Italy;
| | - Laura Dallolio
- Department of Biomedical and Neuromotor Science, University of Bologna, Bologna Via San Giacomo, 12, 40126 Bologna, Italy; (A.M.); (E.L.); (G.L.); (L.D.)
| |
Collapse
|
40
|
Guida A, Abrahamse E, Dijck J. About the interplay between internal and external spatial codes in the mind: implications for serial order. Ann N Y Acad Sci 2020; 1477:20-33. [DOI: 10.1111/nyas.14341] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 02/28/2020] [Accepted: 03/10/2020] [Indexed: 11/29/2022]
Affiliation(s)
| | - Elger Abrahamse
- Communication and Cognition Tilburg University Tilburg the Netherlands
| | - Jean‐Philippe Dijck
- Department of Experimental Psychology Ghent University Ghent Belgium
- Department of Applied Psychology Thomas More Antwerp Belgium
| |
Collapse
|
41
|
Broadbent H, Osborne T, Mareschal D, Kirkham N. Are two cues always better than one? The role of multiple intra-sensory cues compared to multi-cross-sensory cues in children's incidental category learning. Cognition 2020; 199:104202. [PMID: 32087397 DOI: 10.1016/j.cognition.2020.104202] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 01/09/2020] [Accepted: 01/22/2020] [Indexed: 10/25/2022]
Abstract
Simultaneous presentation of multisensory cues has been found to facilitate children's learning to a greater extent than unisensory cues (e.g., Broadbent, White, Mareschal, & Kirkham, 2017). Current research into children's multisensory learning, however, does not address whether these findings reflect multiple cross-sensory cues enhancing stimulus perception, or simply the presence of multiple cues, regardless of modality, that are informative about category membership. The current study examined the role of multiple cross-sensory cues (e.g., audio-visual) compared to multiple intra-sensory cues (e.g., two visual cues) in children's incidental category learning. On a computerized incidental category learning task, children aged six to ten years (N = 454) were allocated to a visual-only (V: unisensory), auditory-only (A: unisensory), audio-visual (AV: multisensory), visual-visual (VV: multi-cue), or auditory-auditory (AA: multi-cue) condition. In children over eight years of age, the availability of two informative cues, whether presented across two different modalities or within the same modality, was more beneficial to incidental learning than a single unisensory cue. In six-year-olds, however, multiple auditory cues (AA) did not facilitate learning to the same extent as multiple visual cues (VV) or cues presented across two different modalities (AV). The findings suggest that multiple sensory cues presented across or within modalities may have differential effects on children's incidental learning across middle childhood, depending on the sensory domain in which they are presented. Implications for the use of multiple cross-sensory and multiple intra-sensory cues in children's learning across this age range are discussed.
Collapse
Affiliation(s)
- H Broadbent
- Royal Holloway, University of London, United Kingdom of Great Britain and Northern Ireland; Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland.
| | - T Osborne
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| | - D Mareschal
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| | - N Kirkham
- Centre for Brain and Cognitive Development, Birkbeck University of London, United Kingdom of Great Britain and Northern Ireland
| |
Collapse
|
42
|
Valori I, McKenna-Plumley PE, Bayramova R, Zandonella Callegher C, Altoè G, Farroni T. Proprioceptive accuracy in Immersive Virtual Reality: A developmental perspective. PLoS One 2020; 15:e0222253. [PMID: 31999710 PMCID: PMC6992210 DOI: 10.1371/journal.pone.0222253] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Accepted: 01/12/2020] [Indexed: 11/19/2022] Open
Abstract
Proprioceptive development relies on a variety of sensory inputs, among which vision is hugely dominant. Focusing on the developmental trajectory underpinning the integration of vision and proprioception, the present research explores how this integration is involved in interactions with Immersive Virtual Reality (IVR) by examining how proprioceptive accuracy is affected by Age, Perception, and Environment. Individuals from 4 to 43 years old completed a self-turning task which asked them to manually return to a previous location with different sensory modalities available in both IVR and reality. Results were interpreted from an exploratory perspective using Bayesian model comparison analysis, which allows the phenomena to be described using probabilistic statements rather than simplified reject/not-reject decisions. The most plausible model showed that 4-8-year-old children can generally be expected to make more proprioceptive errors than older children and adults. Across age groups, proprioceptive accuracy is higher when vision is available, and is disrupted in the visual environment provided by the IVR headset. We can conclude that proprioceptive accuracy mostly develops during the first eight years of life and that it relies largely on vision. Moreover, our findings indicate that this proprioceptive accuracy can be disrupted by the use of an IVR headset.
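The Bayesian model comparison reported above can be approximated in spirit with a simple information-criterion comparison: fit competing models of proprioceptive error and prefer the one with lower BIC (a large-sample approximation to Bayesian model evidence). The data, model forms, and effect size below are invented for illustration and are not the study's actual models.

```python
# Sketch of model comparison via BIC: does a model with an age term explain
# synthetic proprioceptive errors better than a constant-error null model?
import numpy as np

def ols_bic(X, y):
    """Ordinary least squares fit; return BIC under a Gaussian likelihood."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(2)
n = 120
age = rng.uniform(4, 43, size=n)                 # ages spanning the sample range
error = 10.0 / age + rng.normal(0, 0.3, size=n)  # errors shrink with age (assumed form)

bic_null = ols_bic(np.ones((n, 1)), error)                        # constant-error model
bic_age = ols_bic(np.column_stack([np.ones(n), 1 / age]), error)  # age-dependent model
print(bic_age < bic_null)  # lower BIC = more plausible model
```

A full Bayesian analysis would compare posterior model probabilities or Bayes factors; BIC is used here only as a compact stand-in for that logic.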
Collapse
Affiliation(s)
- Irene Valori
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
| | | | - Rena Bayramova
- Department of General Psychology, University of Padova, Padova, Italy
| | | | - Gianmarco Altoè
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
| | - Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- * E-mail:
| |
Collapse
|
43
|
The neural basis of complex audiovisual objects maintenances in working memory. Neuropsychologia 2019; 133:107189. [PMID: 31513808 DOI: 10.1016/j.neuropsychologia.2019.107189] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Revised: 09/02/2019] [Accepted: 09/07/2019] [Indexed: 11/20/2022]
Abstract
Working memory research has primarily concentrated on studying our senses separately; the neural basis of maintaining information from multiple sensory modalities in working memory has not been well elucidated. It is debated whether multisensory information is maintained in the form of modality-specific representations or amodal representations. The present study used functional magnetic resonance imaging and conjunction analysis to investigate which brain regions are engaged in maintaining both types of complex audiovisual objects (semantically congruent and incongruent), and examined in which form multisensory object information is maintained in working memory. The conjunction analysis showed common activation in the left parietal cortex (e.g., left angular gyrus, supramarginal gyrus, and precuneus) while maintaining semantically congruent audiovisual objects, whereas common activation in the bilateral angular gyri, left superior parietal lobule, and left middle temporal gyrus was found while maintaining semantically incongruent audiovisual objects. Importantly, shared conjoint activation, consisting of the bilateral angular gyri and left middle frontal gyrus, was observed while maintaining both semantically congruent and incongruent complex audiovisual objects. These brain regions may play different roles while maintaining such complex multisensory objects, such as supramodal storage per se and intentional attention. The findings of the present study might support the amodal view that working memory has a central storage system to maintain multisensory information from different sensory inputs.
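A conjunction analysis of the kind used here is often implemented as a minimum-statistic test: a voxel counts as commonly active only if its statistic exceeds threshold in every contrast. The sketch below uses synthetic statistic maps; the threshold, signal strength, and voxel counts are arbitrary assumptions, not the study's values.

```python
# Toy minimum-statistic conjunction over two synthetic contrast maps:
# a voxel is "conjointly active" if min(t_congruent, t_incongruent) > threshold.
import numpy as np

rng = np.random.default_rng(3)
t_congruent = rng.normal(0, 1, size=1000)    # t-map for congruent maintenance
t_incongruent = rng.normal(0, 1, size=1000)  # t-map for incongruent maintenance
t_congruent[:50] += 5                        # 50 voxels truly active in both
t_incongruent[:50] += 5

threshold = 3.1                              # illustrative voxel-wise threshold
conjunction = np.minimum(t_congruent, t_incongruent) > threshold
print(f"conjointly active voxels: {int(conjunction.sum())}")
```

Real fMRI pipelines additionally correct for multiple comparisons across voxels, which this toy omits.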
Collapse
|
44
|
Császár-Nagy N, Kapócs G, Bókkon I. Classic psychedelics: the special role of the visual system. Rev Neurosci 2019; 30:651-669. [PMID: 30939118 DOI: 10.1515/revneuro-2018-0092] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2018] [Accepted: 11/05/2018] [Indexed: 12/23/2022]
Abstract
Here, we briefly review the various aspects of classic serotonergic hallucinogens reported by a number of studies. One of the key hypotheses of our paper is that the visual effects of psychedelics might play a key role in resetting fears. We especially focus on visual processes because they are among the most prominent features of hallucinogen-induced hallucinations. We hypothesize that our brain has an ancient visual-based (preverbal) intrinsic cognitive process that, during the transient inhibition of top-down convergent and abstract thinking (mediated by the prefrontal cortex) by psychedelics, can neutralize emotional fears of unconscious and conscious life experiences from the past. In these processes, the decreased functional integrity of the self-referencing processes of the default mode network, the modified multisensory integration (linked to bodily self-consciousness and self-awareness), and the modified amygdala activity may also play key roles. Moreover, the emotional reset (elimination of stress-related emotions) by psychedelics may induce psychological changes and overwrite the stress-related neuroepigenetic information of past unconscious and conscious emotional fears.
Collapse
Affiliation(s)
- Noemi Császár-Nagy
- National University of Public Services, Budapest, Hungary; Psychosomatic Outpatient Clinics, Budapest, Hungary
| | - Gábor Kapócs
- Saint John Hospital, Budapest, Hungary; Institute of Behavioral Sciences, Semmelweis University, Budapest, Hungary
| | - István Bókkon
- Psychosomatic Outpatient Clinics, Budapest, Hungary; Vision Research Institute, Neuroscience and Consciousness Research Department, Lowell, MA, USA
| |
Collapse
|
45
|
Császár N, Kapócs G, Bókkon I. A possible key role of vision in the development of schizophrenia. Rev Neurosci 2019; 30:359-379. [PMID: 30244235 DOI: 10.1515/revneuro-2018-0022] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 08/01/2018] [Indexed: 12/12/2022]
Abstract
Based on a brief overview of the various aspects of schizophrenia reported by numerous studies, here we hypothesize that schizophrenia may originate in (and in part be enacted by) visual areas. In other words, it seems that a normal visual system, or at least an evanescent visual perception, may be an essential prerequisite for the development of schizophrenia as well as of various types of hallucinations. Our study focuses on auditory and visual hallucinations, as they are the most prominent (and most studied) types of hallucinations in schizophrenia. Here, we evaluate the possible key role of the visual system in the development of schizophrenia.
Collapse
Affiliation(s)
- Noemi Császár
- Gaspar Karoly University Psychological Institute, H-1091 Budapest, Hungary; Psychosomatic Outpatient Department, H-1037 Budapest, Hungary
| | - Gabor Kapócs
- Buda Family Centred Mental Health Centre, Department of Psychiatry and Psychiatric Rehabilitation, St. John Hospital, Budapest, Hungary
| | - István Bókkon
- Psychosomatic Outpatient Department, H-1037 Budapest, Hungary; Vision Research Institute, Neuroscience and Consciousness Research Department, 25 Rita Street, Lowell, MA 01854, USA
| |
Collapse
|
46
|
Puszta A, Pertich Á, Katona X, Bodosi B, Nyujtó D, Giricz Z, Eördegh G, Nagy A. Power-spectra and cross-frequency coupling changes in visual and Audio-visual acquired equivalence learning. Sci Rep 2019; 9:9444. [PMID: 31263168 PMCID: PMC6603188 DOI: 10.1038/s41598-019-45978-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Accepted: 06/17/2019] [Indexed: 11/09/2022] Open
Abstract
The three phases of the acquired equivalence learning test applied here, i.e., acquisition, retrieval, and generalization, investigate associative learning, working memory load, and rule transfer, respectively. Earlier findings indicated the role of different subcortical structures and cortical regions in the visual test. However, there is a lack of information about how multimodal cues modify EEG patterns during acquired equivalence learning. To test this, we recorded EEG from 18 healthy volunteers and analyzed the power spectra and the strength of cross-frequency coupling, comparing a unimodal visually guided paradigm with a bimodal, audio-visually guided one. We found that changes in the power of the different frequency-band oscillations were more pronounced during the visual paradigm and showed less synchronized activation compared with the audio-visual paradigm. These findings indicate that multimodal cues require a less prominent but more synchronized cortical contribution, which might be a possible biomarker of forming multimodal associations.
Collapse
Affiliation(s)
- András Puszta
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary.
| | - Ákos Pertich
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary
| | - Xénia Katona
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary
| | - Balázs Bodosi
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary
| | - Diána Nyujtó
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary
| | - Zsófia Giricz
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary
| | - Gabriella Eördegh
- Department of Oral Biology and Experimental Dental Research, Faculty of Dentistry, University of Szeged, Tisza Lajos krt. 64, Szeged, Hungary
| | - Attila Nagy
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, Szeged, Hungary.
| |
Collapse
|
47
|
Williams O, Swierad EM. A Multisensory Multilevel Health Education Model for Diverse Communities. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2019; 16:E872. [PMID: 30857345 PMCID: PMC6427730 DOI: 10.3390/ijerph16050872] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Revised: 02/18/2019] [Accepted: 03/05/2019] [Indexed: 11/16/2022]
Abstract
Owing to their enormous capacity to improve health and save lives, effective health promotion frameworks have been at the forefront of public health research and practice. A multilevel focus, as exemplified by the Socio-Ecological Model (SEM), is one common denominator among these frameworks. The SEM highlights important social and ecological influences on health behavior by delineating the different levels of influence: public policy, organizational, community, interpersonal, and intrapersonal. Considering these levels during the development of health promotion campaigns, especially those that focus on health education, strengthens a campaign's potential influence on targeted behaviors. However, the SEM lacks a complementary framework for understanding the role of conventional and unconventional approaches to health education; that is, how to design a health education intervention that considers both the context, such as the social and ecological levels of influence, and the best approaches for developing and delivering the education in a manner that optimizes its effectiveness in today's modern and increasingly diverse world. Addressing this gap, the current article presents an integrative Multisensory Multilevel Health Education Model (MMHEM), which incorporates three key domains: (1) Art (innovativeness/creativity), (2) Culture (cultural tailoring), and (3) Science (evidence-based practice), while promoting the importance of considering the socio-ecological levels of influence on targeted behaviors. Using a successful health education intervention called Hip Hop Stroke, we deconstruct the MMHEM and discuss its potential role as a guide for developing public health education interventions.
Collapse
Affiliation(s)
- Olajide Williams
- Department of Neurology, Columbia University Medical Center, New York, NY 10032, USA.
| | - Ewelina M Swierad
- Department of Neurology, Columbia University Medical Center, New York, NY 10032, USA.
| |
Collapse
|
48
|
Begum MM, Uddin MS, Rithy JF, Kabir J, Tewari D, Islam A, Ashraf GM. Analyzing the Impact of Soft, Stimulating and Depressing Songs on Attention Among Undergraduate Students: A Cross-Sectional Pilot Study in Bangladesh. Front Psychol 2019; 10:161. [PMID: 30804845 PMCID: PMC6371049 DOI: 10.3389/fpsyg.2019.00161] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Accepted: 01/17/2019] [Indexed: 01/26/2023] Open
Abstract
Music is strongly linked to attention, and paying attention can boost intelligence. The purpose of this study was to scrutinize the impact of soft, stimulating, and depressing songs on the attention of students. The study was performed on 280 undergraduate students. Students were divided into four groups (control, soft, stimulating, and depressing) and exposed to three songs: a soft song (That's My Name), a stimulating song (Rain Over Me), and a depressing song (Broken Angel). Uddin's Numeral Finding (NF) and Typo Revealing (TR) tests were used to analyze the students' attention. In the NF test, the stimulating-song group showed the highest attention (75.54%) relative to the control group, followed by the soft-song group (74.32%); the lowest attention (70.77%) was observed in the depressing-song group. In the TR test, the stimulating-song group again showed the highest attention (45.97%), followed by the soft-song group (45.27%) and the control group (42.70%), with the lowest (41.54%) in the depressing-song group. Concerning sex, males showed higher attention than females in the NF test (77.04%), but females showed higher attention (48.15%) in the TR test for the stimulating-song group. Regarding age, in the NF test the stimulating-song group's highest attention (82.07%) was found among students aged 18–20 years, whereas in the TR test the highest attention (48.75%) was found among students aged 23–25 years. Regarding year of study, in the NF test first-year students showed the highest attention (92.44%), whereas in the TR test the highest attention (57.33%) was found among third-year students, both in the stimulating-song group. Concerning residential status, in both the NF and TR tests, students living with family in the stimulating-song group showed the highest attention (77.93% and 48.6%, respectively), relative to students living away from family and the remaining groups.
This study suggests that songs influence the neuronal circuits linked to alertness and cognitive function: stimulating songs have the greatest power to increase attention, whereas depressing songs reduce it. Stimulating songs may therefore be an effective intervention for enhancing attention and cognitive function, and for treating associated neuropsychological disorders.
Collapse
Affiliation(s)
- Md Sahab Uddin
- Department of Pharmacy, Southeast University, Dhaka, Bangladesh
- Janisa Kabir
- Department of Pharmacy, East West University, Dhaka, Bangladesh
- Devesh Tewari
- Department of Pharmaceutical Sciences, Faculty of Technology, Kumaun University, Uttarakhand, India
- Azharul Islam
- Department of Pharmacy, Dhaka International University, Dhaka, Bangladesh
- Ghulam Md Ashraf
- King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia
Collapse
|
49
|
Barutchu A, Sahu A, Humphreys GW, Spence C. Multisensory processing in event-based prospective memory. Acta Psychol (Amst) 2019; 192:23-30. [PMID: 30391627 DOI: 10.1016/j.actpsy.2018.10.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 08/29/2018] [Accepted: 10/23/2018] [Indexed: 11/28/2022] Open
Abstract
Failures in prospective memory (PM) - that is, failures to remember intended future actions - can have adverse consequences. It is therefore important to study those processes that may help to minimize such cognitive failures. Although multisensory integration has been shown to enhance a wide variety of behaviors, including perception, learning, and memory, its effect on prospective memory in particular is largely unknown. In the present study, we investigated the effects of multisensory processing on two simultaneously performed memory tasks: an ongoing 2- or 3-back working memory (WM) task (20% target ratio), and a PM task in which participants had to respond to a rare predefined letter (8% target ratio). On PM trials, multisensory enhancement was observed for congruent multisensory signals; however, this effect did not generalize to the ongoing WM task. Participants were less likely to make errors on PM than on WM trials, suggesting that they may have biased their attention toward the PM task. Multisensory advantages on memory tasks such as PM and WM may therefore depend on how attentional resources are allocated across dual tasks.
Collapse
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Aparna Sahu
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Charles Spence
- Department of Experimental Psychology, University of Oxford, United Kingdom
Collapse
|
50
|
Frith E, Loprinzi PD. Food insecurity and cognitive function in older adults: Brief report. Clin Nutr 2018; 37:1765-1768. [DOI: 10.1016/j.clnu.2017.07.001] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2017] [Revised: 06/15/2017] [Accepted: 07/01/2017] [Indexed: 11/28/2022]
|