1. Generalized modality responses in primary sensory neurons of awake mice during the development of neuropathic pain. Front Neurosci 2024; 18:1368507. [PMID: 38690372] [PMCID: PMC11058805] [DOI: 10.3389/fnins.2024.1368507]
Abstract
Introduction Peripheral sensory neurons serve as the initial responders to the external environment. How these neurons react to different sensory stimuli, such as mechanical or thermal forces applied to the skin, remains unclear. Methods Using in vivo two-photon Ca2+ imaging in the lumbar 4 dorsal root ganglion (DRG) of awake Thy1.2-GCaMP6s mice, we assessed neuronal responses to various mechanical (punctate or dynamic) and thermal (heat or cold) stimuli sequentially applied to the paw plantar surface. Results Our data indicate that in normal awake male mice, approximately 14% of DRG neurons respond to a single modality of stimulation and 38% respond to multiple modalities. Anesthesia substantially reduces the number of responsive neurons but does not alter the ratio of cells exhibiting single-modal versus multi-modal responses. Following peripheral nerve injury, DRG cells exhibit a more than 5.1-fold increase in spontaneous neuronal activity and a 1.5-fold increase in sensory stimulus-evoked activity. As neuropathic pain resulting from nerve injury progresses, the polymodal nature of sensory neurons intensifies: the polymodal population increases from 39.1% to 56.9%, while the modality-specific population decreases from 14.7% to 5.0% within a period of 5 days. Discussion Our study underscores polymodality as a significant characteristic of primary sensory neurons, one that becomes more pronounced during the development of neuropathic pain.
2. Positive Emotional Responses to Socially Assistive Robots in People With Dementia: Pilot Study. JMIR Aging 2024; 7:e52443. [PMID: 38623717] [PMCID: PMC11034362] [DOI: 10.2196/52443]
Abstract
Background Interventions and care that can evoke positive emotions and reduce apathy or agitation are important for people with dementia. In recent years, socially assistive robots used for better dementia care have been found to be feasible. However, the immediate responses of people with dementia when they are given multiple sensory modalities from socially assistive robots have not yet been sufficiently elucidated. Objective This study aimed to quantitatively examine the immediate emotional responses of people with dementia to stimuli presented by socially assistive robots using facial expression analysis in order to determine whether they elicited positive emotions. Methods This pilot study adopted a single-arm interventional design. Socially assistive robots were presented to nursing home residents in a three-step procedure: (1) the robot was placed in front of participants (visual stimulus), (2) the robot was manipulated to produce sound (visual and auditory stimuli), and (3) participants held the robot in their hands (visual, auditory, and tactile stimuli). Expression intensity values for "happy," "sad," "angry," "surprised," "scared," and "disgusted" were calculated continuously using facial expression analysis with FaceReader. Additionally, self-reported feelings were assessed using a 5-point Likert scale. In addition to the comparison between the subjective and objective emotional assessments, expression intensity values were compared across the aforementioned 3 stimuli patterns within each session. Finally, the expression intensity value for "happy" was compared between the different types of robots. Results A total of 29 participants (mean age 88.7, SD 6.2 years; n=27 female; Japanese version of Mini-Mental State Examination mean score 18.2, SD 5.1) were recruited. 
The expression intensity value for "happy" was the largest in both the subjective and objective assessments and increased significantly when all sensory modalities (visual, auditory, and tactile) were presented (median expression intensity 0.21, IQR 0.09-0.35) compared to the other 2 patterns (visual alone: median expression intensity 0.10, IQR 0.03-0.22; P<.001; visual and auditory: median expression intensity 0.10, IQR 0.04-0.23; P<.001). The comparison of different types of robots revealed a significant increase when all stimuli were presented by doll-type and animal-type robots, but not humanoid-type robots. Conclusions By quantifying the emotional responses of people with dementia, this study highlighted that socially assistive robots may be more effective in eliciting positive emotions when multiple sensory stimuli, including tactile stimuli, are involved. More studies, including randomized controlled trials, are required to further explore the effectiveness of using socially assistive robots in dementia care.
3. Attention-sensitive signalling by 7- to 20-month-old infants in a comparative perspective. Front Psychol 2024; 15:1257324. [PMID: 38562240] [PMCID: PMC10982422] [DOI: 10.3389/fpsyg.2024.1257324]
Abstract
Attention-sensitive signalling is the pragmatic skill of signallers who adjust the modality of their communicative signals to their recipient's attention state. This study provides the first comprehensive evidence for its onset and development in 7- to 20-month-old human infants, and underlines its significance for language acquisition and evolutionary history. Mother-infant dyads (N = 30) were studied in naturalistic settings, sampled according to three developmental periods (in months): [7-10], [11-14], and [15-20]. Infants' signals were classified by dominant perceptible sensory modality, and proportions were compared according to the mother's visual attention, infant-directed speech and tactile contact. Maternal visual attention and infant-directed speech influenced the onset and steepness of infants' communicative adjustments. The ability to inhibit silent-visual signals towards visually inattentive mothers (unimodal adjustment) predated the ability to deploy audible-or-contact signals in this case (cross-modal adjustment). Maternal scaffolding of infants' early pragmatic skills through infant-directed speech facilitates unimodal adjustment, a preference for oral over gestural signals, and audio-visual combinations of signals. Additionally, breakdowns in maternal visual attention are associated with increased use of the audible-oral modality/channel. The evolutionary role of shared attentional resources between parents and infants in the emergence of modern language is discussed.
4. Visual movement impairs duration discrimination at short intervals. Q J Exp Psychol (Hove) 2024; 77:57-69. [PMID: 36717537] [PMCID: PMC10712207] [DOI: 10.1177/17470218231156542]
Abstract
The classic advantage of audition over vision in time processing has recently been challenged by studies using continuously moving visual stimuli such as bouncing balls. Bouncing balls drive beat-based synchronisation better than static visual stimuli (flashes) and as efficiently as auditory ones (beeps). It is not yet known how bouncing balls modulate performance in duration perception. Our previous study addressing this was inconclusive: there were no differences among bouncing balls, flashes, and beeps, but this may have been because the intervals were too long to allow sensitivity to modality (visual vs auditory). In this study, we conducted a first experiment to determine whether shorter intervals elicit cross-stimulus differences. We found that short (mean 157 ms) but not medium (326 ms) intervals made duration perception worse for bouncing balls compared with flashes and beeps. In a second experiment, we investigated whether the lower efficiency of bouncing balls was due to experimental confounds, lack of realism, or movement. We ruled out the experimental confounds and found support for the hypothesis that visual movement, be it continuous or discontinuous, impairs duration perception at short interval lengths. Therefore, unlike beat-based synchronisation, duration perception does not benefit from continuous visual movement, which may even have a detrimental effect at short intervals.
5. Distinct Patterns of Connectivity between Brain Regions Underlie the Intra-Modal and Cross-Modal Value-Driven Modulations of the Visual Cortex. J Neurosci 2023; 43:7361-7375. [PMID: 37684031] [PMCID: PMC10621764] [DOI: 10.1523/jneurosci.0355-23.2023]
Abstract
Past reward associations may be signaled from different sensory modalities; however, it remains unclear how different types of reward-associated stimuli modulate sensory perception. In this human fMRI study (female and male participants), a visual target was simultaneously presented with either an intra-modal (visual) or a cross-modal (auditory) cue that was previously associated with rewards. We hypothesized that, depending on the sensory modality of the cues, distinct neural mechanisms underlie the value-driven modulation of visual processing. Using a multivariate approach, we confirmed that reward-associated cues enhanced the target representation in early visual areas and identified the brain valuation regions. Then, using an effective connectivity analysis, we tested three possible patterns of connectivity that could underlie the modulation of the visual cortex: a direct pathway from the frontal valuation areas to the visual areas, a mediated pathway through the attention-related areas, and a mediated pathway that additionally involved sensory association areas. We found evidence for the third model, demonstrating that reward-related information in both sensory modalities is communicated across the valuation and attention-related brain regions. Additionally, the superior temporal areas were recruited when reward was cued cross-modally. The strongest dissociation between the intra- and cross-modal reward-driven effects was observed at the level of the feedforward and feedback connections of the visual cortex estimated from the winning model. These results suggest that, in the presence of previously rewarded stimuli from different sensory modalities, a combination of domain-general and domain-specific mechanisms is recruited across the brain to adjust visual perception.

SIGNIFICANCE STATEMENT Reward has a profound effect on perception, but it is not known whether shared or disparate mechanisms underlie the reward-driven effects across sensory modalities.
In this human fMRI study, we examined the reward-driven modulation of the visual cortex by visual (intra-modal) and auditory (cross-modal) reward-associated cues. Using a model-based approach to identify the most plausible pattern of inter-regional effective connectivity, we found that higher-order areas involved in the valuation and attentional processing were recruited by both types of rewards. However, the pattern of connectivity between these areas and the early visual cortex was distinct between the intra- and cross-modal rewards. This evidence suggests that, to effectively adapt to the environment, reward signals may recruit both domain-general and domain-specific mechanisms.
6. Value-driven modulation of visual perception by visual and auditory reward cues: The role of performance-contingent delivery of reward. Front Hum Neurosci 2022; 16:1062168. [PMID: 36618995] [PMCID: PMC9816136] [DOI: 10.3389/fnhum.2022.1062168]
Abstract
Perception is modulated by reward value, an effect elicited not only by stimuli that are predictive of performance-contingent delivery of reward (PC) but also by stimuli that were previously rewarded (PR). PC and PR cues may engage different mechanisms, relying on goal-driven versus stimulus-driven prioritization of high-value stimuli, respectively. However, these two modes of reward modulation have not been systematically compared against each other. This study employed a behavioral paradigm in which participants' visual orientation discrimination was tested in the presence of task-irrelevant visual or auditory reward cues. In the first phase (PC), correct performance led to a high or low monetary reward depending on the identity of the visual or auditory cues. In the subsequent phase (PR), visual or auditory cues were no longer followed by reward delivery. We hypothesized that PC cues have a stronger modulatory effect on visual discrimination and pupil responses than PR cues. We found an overall larger task-evoked pupil dilation in the PC phase compared to the PR phase. Whereas PC and PR cues both increased the accuracy of visual discrimination, value-driven acceleration of reaction times (RTs) and pupillary responses only occurred for PC cues. The modulation of pupil size by high-reward PC cues was strongly correlated with the modulation of a combined measure of speed and accuracy. These results indicate that although value-driven modulation of perception can occur even when reward delivery is halted, stronger goal-driven control elicited by PC reward cues additionally results in a more efficient balance between accuracy and speed of perceptual choices.
7. Multisensory Integration in Caenorhabditis elegans in Comparison to Mammals. Brain Sci 2022; 12:brainsci12101368. [PMID: 36291302] [PMCID: PMC9599712] [DOI: 10.3390/brainsci12101368]
Abstract
Multisensory integration refers to sensory inputs from different sensory modalities being processed simultaneously to produce a unitary output. Surrounded by stimuli from multiple modalities, animals use multisensory integration to form a coherent and robust representation of their complex environment. Even though multisensory integration is fundamentally essential for animal life, the underlying mechanisms, especially at the molecular, synaptic and circuit levels, remain poorly understood. The study of sensory perception in Caenorhabditis elegans has begun to fill this gap. We have gained considerable insight into the general principles of sensory neurobiology owing to C. elegans’ highly sensitive perception, relatively simple nervous system, ample genetic tools and completely mapped neural connectome. Many interesting paradigms of multisensory integration have been characterized in C. elegans, in which input convergence occurs at the sensory neuron or the interneuron level. In this narrative review, we describe some representative cases of multisensory integration in C. elegans, summarize the underlying mechanisms and compare them with those in mammalian systems. Despite the differences, we believe C. elegans is able to provide unique insights into how processing and integrating multisensory inputs can generate flexible and adaptive behaviors. With the emergence of whole-brain imaging, the ability to monitor nearly the entire nervous system of C. elegans may be crucial for understanding the function of the brain as a whole.
8. Non-linear multimodal integration in a distributed premotor network controls proprioceptive reflex gain in the insect leg. Curr Biol 2022; 32:3847-3854.e3. [PMID: 35896118] [DOI: 10.1016/j.cub.2022.07.005]
Abstract
Producing context-appropriate motor acts requires integrating multiple sensory modalities. Presynaptic inhibition of proprioceptive afferent neurons [1-4] and afferents of different modalities targeting the same motor neurons (MNs) [5-7] underlies some of this integration. However, in most systems, an interneuronal network is interposed between sensory afferents and MNs. How these networks contribute to this integration, particularly at single-neuron resolution, is little understood. Context-specific integration of load and movement sensory inputs occurs in the stick insect locomotory system [6,8-12], and both inputs feed into a network of premotor nonspiking interneurons (NSIs) [8]. We analyzed how load altered movement signal processing in the stick insect femur-tibia (FTi) joint control system by tracing the interaction of FTi movement [13-15] (femoral chordotonal organ [fCO]) and load [13,15,16] (tibial campaniform sensilla [CS]) signals through the NSI network to the slow extensor tibiae (SETi) MN, the extensor MN primarily active in non-walking animals [17-19]. On the afferent level, load reduced movement signal gain by presynaptic inhibition. In the NSI network, graded responses to movement and load inputs summed nonlinearly, increasing the gain of NSIs opposing movement-induced reflexes and thus decreasing the SETi and extensor tibiae muscle movement reflex responses. Gain modulation was movement-parameter specific and required presynaptic inhibition. These data suggest that gain changes in distributed premotor networks, specifically the relative weighting of antagonistic pathways, could be a general mechanism by which multiple sensory modalities are integrated to generate context-appropriate motor activity.
9. Response Inhibitory Control Varies with Different Sensory Modalities. Cereb Cortex 2021; 32:275-285. [PMID: 34223874] [DOI: 10.1093/cercor/bhab207]
Abstract
Response inhibition plays an essential role in withholding responses to anticipated and unpredictable events in our daily lives. It is divided into proactive inhibition, in which subjects postpone responses to an upcoming signal, and reactive inhibition, in which subjects stop an impending movement upon the presentation of a signal. Different types of sensory input are involved in both forms of inhibition; however, how proactive and reactive inhibition differ across sensory modalities remains unclear. This study compared proactive and reactive inhibition induced by visual, auditory, and somatosensory signals using the choice reaction task (CRT) and stop-signal task (SST). The experiments showed that proactive inhibition was significantly higher in the auditory and somatosensory modalities than in the visual modality, whereas reactive inhibition was not. Examining proactive inhibition-associated neural processing, the auditory and somatosensory modalities showed significant decreases in P3 amplitudes in Go signal-locked event-related potentials (ERPs) in the SST relative to those in the CRT; this might reflect a decreasing attentional resource for response execution in the SST in both modalities. In contrast, we did not find significant differences in the reactive inhibition-associated ERPs. These results suggest that proactive inhibition varies with different sensory modalities, whereas reactive inhibition does not.
10. Multisensory Integration Is Modulated by Hypnotizability. Int J Clin Exp Hypn 2021; 69:215-224. [PMID: 33560171] [DOI: 10.1080/00207144.2021.1877089]
Abstract
This study investigated multisensory integration in 29 medium-to-high (mid-highs) and 24 low-to-medium (mid-lows) hypnotizable individuals, classified according to the Stanford Hypnotic Susceptibility Scale, Form A. Participants completed a simultaneity judgment (SJ) task, where an auditory and a visual stimulus were presented in close proximity to their body in a range of 11 stimulus onset asynchronies. Results show that mid-highs were prone to judge audiovisual stimuli as simultaneous over a wider range of time intervals between sensory stimuli, as expressed by a broader temporal binding window, when the visual stimulus precedes the auditory one. No significant difference was observed for response times. Findings indicate a role of hypnotizability in multisensory integration likely due to the highs' cerebellar peculiarities and/or sensory modality preference.
11. The Differential Effects of Auditory and Visual Stimuli on Learning, Retention and Reactivation of a Perceptual-Motor Temporal Sequence in Children With Developmental Coordination Disorder. Front Hum Neurosci 2021; 15:616795. [PMID: 33867955] [PMCID: PMC8044544] [DOI: 10.3389/fnhum.2021.616795]
Abstract
This study investigates the procedural learning, retention, and reactivation of temporal sensorimotor sequences in children with and without developmental coordination disorder (DCD). Twenty typically-developing (TD) children and 12 children with DCD took part in this study. The children were required to tap on a keyboard, synchronizing with auditory or visual stimuli presented as an isochronous temporal sequence, and practice non-isochronous temporal sequences to memorize them. Immediate and delayed retention of the audio-motor and visuo-motor non-isochronous sequences were tested by removing auditory or visual stimuli immediately after practice and after a delay of 2 h. A reactivation test involved reintroducing the auditory and visual stimuli after the delayed recall. Data were computed via circular analyses to obtain asynchrony, the stability of synchronization and errors (i.e., the number of supplementary taps). Firstly, an overall deficit in synchronization with both auditory and visual isochronous stimuli was observed in DCD children compared to TD children. During practice, further improvements (decrease in asynchrony and increase in stability) were found for the audio-motor non-isochronous sequence compared to the visuo-motor non-isochronous sequence in both TD children and children with DCD. However, a drastic increase in errors occurred in children with DCD during immediate retention as soon as the auditory stimuli were removed. Reintroducing auditory stimuli decreased errors in the audio-motor sequence for children with DCD. Such changes were not seen for the visuo-motor non-isochronous sequence, which was equally learned, retained and reactivated in DCD and TD children. All these results suggest that TD children benefit from both auditory and visual stimuli to memorize the sequence, whereas children with DCD seem to present a deficit in integrating an audio-motor sequence in their memory. 
The immediate effect of reactivation suggests a specific dependency on auditory information in DCD. Contrary to the audio-motor sequence, the visuo-motor sequence was both learned and retained in children with DCD. This suggests that visual stimuli could be the best information for memorizing a temporal sequence in DCD. All these results are discussed in terms of a specific audio-motor coupling deficit in DCD.
12. Exploiting common senses: sensory ecology meets wildlife conservation and management. Conserv Physiol 2021; 9:coab002. [PMID: 33815799] [PMCID: PMC8009554] [DOI: 10.1093/conphys/coab002]
Abstract
Multidisciplinary approaches to conservation and wildlife management are often effective in addressing complex, multi-factor problems. Emerging fields such as conservation physiology and conservation behaviour can provide innovative solutions and management strategies for target species and systems. Sensory ecology combines the study of how animals acquire and process sensory stimuli from their environments with the study of the ecological and evolutionary significance of how animals respond to this information. We review the benefits that sensory ecology can bring to wildlife conservation and management by discussing case studies across major taxa and sensory modalities. Conservation practices informed by a sensory ecology approach include the amelioration of sensory traps, control of invasive species, reduction of human-wildlife conflicts, and relocation and establishment of new populations of endangered species. We illustrate that sensory ecology can facilitate the understanding of the mechanistic ecological and physiological explanations underlying particular conservation issues and can also help develop innovative solutions to ameliorate conservation problems.
13. The Neural Correlates of Visual and Auditory Cross-Modal Selective Attention in Aging. Front Aging Neurosci 2020; 12:498978. [PMID: 33304265] [PMCID: PMC7693624] [DOI: 10.3389/fnagi.2020.498978]
Abstract
Age-related deficits in selective attention have been demonstrated to depend on the sensory modality through which targets and distractors are presented. Some of these investigations suggest a specific impairment of cross-modal auditory selective attention. This study is the first to take a whole-brain approach, including a passive perception baseline, to investigate the neural underpinnings of selective attention across age groups while taking the sensory modality of relevant and irrelevant (i.e., distracting) stimuli into account. Sixteen younger (mean age = 23.3 years) and 14 older (mean age = 65.3 years) healthy participants performed a series of delayed match-to-sample tasks, in which they had to selectively attend to visual stimuli, selectively attend to auditory stimuli, or passively view and hear both types of stimuli, while undergoing 3T fMRI. The imaging analyses showed that areas recruited by cross-modal visual and auditory selective attention in both age groups included parts of the dorsal attention and frontoparietal control networks (i.e., intraparietal sulcus, insula, fusiform gyrus, anterior cingulate, and inferior frontal cortex). Most importantly, activation throughout the brain did not differ across age groups, suggesting intact brain function during cross-modal selective attention in older adults. Moreover, stronger brain activation during cross-modal visual vs. cross-modal auditory selective attention was found in both age groups, which is consistent with earlier accounts of visual dominance. In conclusion, these results do not support the hypothesized age-related deficit of cross-modal auditory selective attention. Instead, they suggest that the underlying neural correlates of cross-modal selective attention are similar in younger and older adults.
14. Pain and Distraction According to Sensory Modalities: Current Findings and Future Directions. Pain Pract 2019; 19:686-702. [PMID: 31104345] [DOI: 10.1111/papr.12799]
Abstract
BACKGROUND This review discusses findings in the literature on pain and distraction tasks according to their sensory modality. Distraction tasks have been shown to reduce (experimentally induced) acute pain and chronic pain. This effect can be influenced by the nature of the task and by the sensory modalities it engages, yet the effect of the sensory modality of the distraction task on pain reduction has received little attention. METHODS A bibliographic search was performed in different databases. Studies were systematized according to the sensory modality in which the distraction task was applied. RESULTS Studies with auditory distractors showed a reduction of acute pain in adults; however, auditory distractors were not effective in healthy children or in adults with chronic pain. Visual distractors showed promising results for acute pain in adults and children. Similarly, tactile and mixed distractors decreased acute pain in adults. CONCLUSION Distraction tasks in diverse sensory modalities have a positive effect on decreasing the perception of acute pain in adults. Future studies are necessary given the paucity of research on this topic, particularly with tactile distractors (for which there is only one study). Finally, rigorous methodology and the use of ecological contexts are encouraged in future research.
15. Bumblebees distinguish floral scent patterns, and can transfer these to corresponding visual patterns. Proc Biol Sci 2018; 285:20180661. [PMID: 29899070] [PMCID: PMC6015847] [DOI: 10.1098/rspb.2018.0661]
Abstract
Flowers act as multisensory billboards to pollinators by using a range of sensory modalities such as visual patterns and scents. Different floral organs release differing compositions and quantities of the volatiles contributing to floral scent, suggesting that scent may be patterned within flowers. Early experiments suggested that pollinators can distinguish between the scents of differing floral regions, but little is known about how these potential scent patterns might influence pollinators. We show that bumblebees can learn different spatial patterns of the same scent, and that they are better at learning to distinguish between flowers when the scent pattern corresponds to a matching visual pattern. Surprisingly, once bees have learnt the spatial arrangement of a scent pattern, they subsequently prefer to visit novel unscented flowers that have an identical arrangement of visual marks, suggesting that multimodal floral signals may exploit the mechanisms by which learnt information is stored by the bee.
|
16
|
Selection for associative learning of color stimuli reveals correlated evolution of this learning ability across multiple stimuli and rewards. Evolution 2018; 72:1449-1459. [PMID: 29768649 PMCID: PMC6099215 DOI: 10.1111/evo.13498] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2017] [Accepted: 04/15/2018] [Indexed: 01/19/2023]
Abstract
We are only starting to understand how variation in cognitive ability can result from local adaptations to environmental conditions. A major question in this regard is to what extent selection on cognitive ability in a specific context affects that ability in general through correlated evolution. To address this question, we performed artificial selection on visual associative learning in female Nasonia vitripennis wasps. Using appetitive conditioning in which a visual stimulus was offered in association with a host reward, the ability to learn visual associations was enhanced within 10 generations of selection. To test for correlated evolution affecting this form of learning, the ability to readily form learned associations in females was also tested using an olfactory instead of a visual stimulus in the appetitive conditioning. Additionally, we assessed whether the improved associative learning ability was expressed across sexes by color-conditioning males with a mating reward. Both females and males from the selected lines consistently demonstrated an increased associative learning ability compared to the control lines, independent of learning context or conditioned stimulus. No difference in relative volume of brain neuropils was detected between the selected and control lines.
|
17
|
Visual-auditory differences in duration discrimination depend on modality-specific, sensory-automatic temporal processing: Converging evidence for the validity of the Sensory-Automatic Timing Hypothesis. Q J Exp Psychol (Hove) 2018; 71:2364-2377. [PMID: 30362412 DOI: 10.1177/1747021817741611] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
The Sensory-Automatic Timing Hypothesis assumes visual-auditory differences in duration discrimination to originate from sensory-automatic temporal processing. Although temporal discrimination of extremely brief intervals in the range of tens-of-milliseconds is predicted to depend mainly on modality-specific, sensory-automatic temporal processing, duration discrimination of longer intervals is predicted to require more and more amodal, higher order cognitive resources and decreasing input from the sensory-automatic timing system with increasing interval duration. In two duration discrimination experiments with sensory modality as a within- and a between-subjects variable, respectively, we tested two decisive predictions derived from the Sensory-Automatic Timing Hypothesis: (1) visual-auditory differences in duration discrimination were expected to be larger for brief intervals in the tens-of-milliseconds range than for longer ones, and (2) visual-auditory differences in duration discrimination of longer intervals should disappear when statistically controlled for modality-specific input from the sensory-automatic timing system. In both experiments, visual-auditory differences in duration discrimination were larger for the brief than for the longer intervals. Furthermore, visual-auditory differences observed with longer intervals disappeared when statistically controlled for modality-specific input from the sensory-automatic timing system. Thus, our findings clearly confirmed the validity of the Sensory-Automatic Timing Hypothesis.
|
18
|
New Breakthroughs in Understanding the Role of Functional Interactions between the Neocortex and the Claustrum. J Neurosci 2017; 37:10877-10881. [PMID: 29118217 DOI: 10.1523/jneurosci.1837-17.2017] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 09/29/2017] [Accepted: 09/29/2017] [Indexed: 01/21/2023] Open
Abstract
Almost all areas of the neocortex are connected with the claustrum, a nucleus located between the neocortex and the striatum, yet the functions of corticoclaustral and claustrocortical connections remain largely obscure. As major efforts to model the neocortex are currently underway, it has become increasingly important to incorporate the corticoclaustral system into theories of cortical function. This Mini-Symposium was motivated by a series of recent studies which have sparked new hypotheses regarding the function of claustral circuits. Anatomical, ultrastructural, and functional studies indicate that the claustrum is most highly interconnected with prefrontal cortex, suggesting important roles in higher cognitive processing, and that the organization of the corticoclaustral system is distinct from the driver/modulator framework often used to describe the corticothalamic system. Recent findings supporting roles in detecting novel sensory stimuli, directing attention and setting behavioral states, were the subject of the Mini-Symposium at the 2017 Society for Neuroscience Annual Meeting.
|
19
|
Effects of preference and sensory modality on behavioural reaction in patients with disorders of consciousness. Brain Inj 2017; 31:1307-1311. [PMID: 28534673 DOI: 10.1080/02699052.2017.1306108] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Reliable evaluation of patients with unresponsive wakefulness syndrome (UWS) or in a minimally conscious state (MCS) remains a major challenge. It has been suggested that the expression of residual cerebral function could be improved by allowing patients to listen to their favourite music. However, the potential effect of music on behavioural responsiveness, as well as the effect of preferred stimuli in other sensory modalities (e.g. olfaction), remain poorly understood. OBJECTIVE The aim of our study was to investigate the effect of sensory modality (auditory versus olfactory) and preference (preferred versus neutral) of the test stimuli on patients' subsequent performance on the Coma Recovery Scale-Revised (CRS-R). RESEARCH DESIGN Within-subject design because of inter-individual differences between patients. METHODS AND PROCEDURES We studied four items from the CRS-R (visual pursuit using a mirror, auditory localization of the own name and two movements to command) in 13 patients (7 MCS; 6 UWS). MAIN OUTCOMES AND RESULTS Auditory stimuli triggered higher responsiveness compared to olfactory stimuli, and preferred stimuli yielded higher scores than neutral stimuli. CONCLUSIONS Findings suggest that preferred auditory stimuli at the bedside contribute to the expression of residual function and could improve the diagnostic assessment.
|
20
|
Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory. Memory 2017; 25:1340-1346. [PMID: 28287018 DOI: 10.1080/09658211.2017.1300668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding.
|
21
|
Selective Attention and Sensory Modality in Aging: Curses and Blessings. Front Hum Neurosci 2016; 10:147. [PMID: 27064763 PMCID: PMC4814507 DOI: 10.3389/fnhum.2016.00147] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2016] [Accepted: 03/21/2016] [Indexed: 11/13/2022] Open
Abstract
The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established. Yet it is primarily based on empirical findings covering the visual modality. Auditory and especially, cross-modal selective attention are remarkably underexposed in the literature on aging. In the past 5 years, we have attempted to fill these voids by investigating performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory target, and visual or auditory distractor information. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern of results was not mirrored by the results from our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that: (1) age-related differences of selective attention are modality dependent; (2) age-related differences of selective attention are limited; and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.
|
22
|
PowerPoint presentation in learning physiology by undergraduates with different learning styles. ADVANCES IN PHYSIOLOGY EDUCATION 2015; 39:367-371. [PMID: 26628661 DOI: 10.1152/advan.00119.2015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
PowerPoint presentations (PPTs) have become routine in medical colleges because of their flexible and varied presentation capabilities. Research indicates that students prefer PPTs over the chalk-and-talk method, and there is considerable debate over the advantages and disadvantages of PPTs. However, there is no clear evidence that PPTs improve student learning/performance. Furthermore, there are a variety of learning styles with sex differences in classrooms. It is the responsibility of the teacher/facilitator and the student to be aware of learning style preferences to improve learning. The present study asked the following research question: do PPTs equally affect the learning of students with different learning styles in a mixed sex classroom? After we assessed students' predominant learning style according to the sensory modality that one most prefers to use when learning, a test was conducted before and after a PPT to assess student performance. The results were analyzed using Student's t-test and ANOVA with a Bonferroni post hoc test. A z-test showed no sex differences in preferred learning styles. There was a significant increase in posttest performance compared with that of the pretest in all types of learners of both sexes. There was also a nonsignificant relationship among sex, learning style, and performance after the PPT. A PPT is equally effective for students with different learning style preferences and supports mixed sex classrooms.
|
23
|
Precision and Bias in Approximate Numerical Judgment in Auditory, Tactile, and Cross-modal Presentation. Perception 2015; 45:56-70. [PMID: 26562851 DOI: 10.1177/0301006615596888] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Many studies have claimed that the numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in a visual, auditory, or tactile modality. However, in behavioral studies, some inconsistencies have been observed in the performance of number comparisons among different modalities. In this study, we have tested whether numerical comparisons of auditory, tactile, and cross-modal presentations would differ under adequate control of stimulus presentation, and, if so, how they would differ. The unimodal and cross-modal stimulus pairs were presented in a sequential manner. We measured the Weber fractions (i.e., precision) and points of subjective equality (i.e., accuracy) of numerical discriminations in auditory, tactile, and cross-modal conditions. The results showed that the Weber fractions are constant over standard stimuli, indicating that Weber's law holds for the range of numerical values that was tested. Furthermore, the Weber fractions are consistent over unimodal and cross-modal comparisons, and this indicates that there is no additional noise involved in the cross-modal comparisons. Interestingly, the bias measure showed that the number of auditory stimuli is systematically overestimated compared with that of tactile stimuli.
|
24
|
Visual-auditory differences in duration discrimination of intervals in the subsecond and second range. Front Psychol 2015; 6:1626. [PMID: 26579013 PMCID: PMC4620148 DOI: 10.3389/fpsyg.2015.01626] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2015] [Accepted: 10/08/2015] [Indexed: 12/02/2022] Open
Abstract
A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower milliseconds range are processed automatically while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed two auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals could be established for extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the 1-s range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the 1-s range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic to a more cognitive, amodal timing mechanism. Within this transition zone, both mechanisms appear to operate simultaneously but the influence of the sensory-automatic timing mechanism is expected to continuously decrease with increasing interval duration.
|
25
|
Petrosal ganglion: a more complex role than originally imagined. Front Physiol 2014; 5:474. [PMID: 25538627 PMCID: PMC4255496 DOI: 10.3389/fphys.2014.00474] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2014] [Accepted: 11/17/2014] [Indexed: 11/13/2022] Open
Abstract
The petrosal ganglion (PG) is a peripheral sensory ganglion, composed of pseudomonopolar sensory neurons that innervate the posterior third of the tongue and the carotid sinus and body. According to their electrical properties PG neurons can be ascribed to one of two categories: (i) neurons with action potentials presenting an inflection (hump) on its repolarizing phase and (ii) neurons with fast and brisk action potentials. Although there is some correlation between the electrophysiological properties and the sensory modality of the neurons in some species, no general pattern can be easily recognized. On the other hand, petrosal neurons projecting to the carotid body are activated by several transmitters, with acetylcholine and ATP being the most conspicuous in most species. Petrosal neurons are completely surrounded by a multi-cellular sheet of glial (satellite) cells that prevents the formation of chemical or electrical synapses between neurons. Thus, PG neurons are regarded as mere wires that communicate the periphery (i.e., carotid body) and the central nervous system. However, it has been shown that in other sensory ganglia satellite glial cells and their neighboring neurons can interact, partly by the release of chemical neuro-glio transmitters. This intercellular communication can potentially modulate the excitatory status of sensory neurons and thus the afferent discharge. In this mini review, we will briefly summarize the general properties of PG neurons and the current knowledge about the glial-neuron communication in sensory neurons and how this phenomenon could be important in the chemical sensory processing generated in the carotid body.
|
26
|
Perceiving blocks of emotional pictures and sounds: effects on physiological variables. Front Hum Neurosci 2013; 7:295. [PMID: 23801957 PMCID: PMC3689025 DOI: 10.3389/fnhum.2013.00295] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Accepted: 06/04/2013] [Indexed: 11/13/2022] Open
Abstract
Most studies on physiological effects of emotion-inducing images and sounds examine stimulus locked variables reflecting a state of at most a few seconds. We here aimed to induce longer lasting emotional states using blocks of repetitive visual, auditory, and bimodal stimuli corresponding to specific valence and arousal levels. The duration of these blocks enabled us to reliably measure heart rate variability as a possible indicator of arousal. In addition, heart rate and skin conductance were determined without taking stimulus timing into account. Heart rate was higher for pleasant and low arousal stimuli compared to unpleasant and high arousal stimuli. Heart rate variability and skin conductance increased with arousal. Effects of valence and arousal on cardiovascular measures habituated or remained the same over 2-min intervals whereas the arousal effect on skin conductance increased. We did not find any effect of stimulus modality. Our results indicate that blocks of images and sounds of specific valence and arousal levels consistently influence different physiological parameters. These parameters need not be stimulus locked. We found no evidence for differences in emotion induction between visual and auditory stimuli, nor did we find bimodal stimuli to be more potent than unimodal stimuli. The latter could be (partly) due to the fact that our bimodal stimuli were not optimally congruent.
|