1. Baths V, Jartarkar M, Sood S, Lewis AG, Ostarek M, Huettig F. Testing the involvement of low-level visual representations during spoken word processing with non-Western students and meditators practicing Sudarshan Kriya Yoga. Brain Res 2024; 1838:148993. PMID: 38729334. DOI: 10.1016/j.brainres.2024.148993.
Abstract
Previous studies, using the Continuous Flash Suppression (CFS) paradigm, observed that (Western) university students are better able to detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Here we attempted to replicate this effect with non-Western university students in Goa (India). A second aim was to explore the performance of (non-Western) meditators practicing Sudarshan Kriya Yoga in Goa in the same task. Some previous literature suggests that meditators may excel in some tasks that tap visual attention, for example by exercising better endogenous and exogenous control of visual awareness than non-meditators. The present study replicated the finding that congruent spoken cue words lead to significantly higher detection sensitivity than incongruent cue words in non-Western university students. Our exploratory meditator group also showed this detection effect but both frequentist and Bayesian analyses suggest that the practice of meditation did not modulate it. Overall, our results provide further support for the notion that spoken words can activate low-level category-specific visual features that boost the basic capacity to detect the presence of a visual stimulus that has those features. Further research is required to conclusively test whether meditation can modulate visual detection abilities in CFS and similar tasks.
Affiliation(s)
- Veeky Baths
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Mayur Jartarkar
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Shagun Sood
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Ashley G Lewis
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; University of Kaiserslautern-Landau, Center for Cognitive Science, Kaiserslautern, Germany; University of Lisbon, Faculty of Psychology, Lisbon, Portugal
2. Lee PS, Sewell DK. A revised diffusion model for conflict tasks. Psychon Bull Rev 2024; 31:1-31. PMID: 37507646; PMCID: PMC10867079. DOI: 10.3758/s13423-023-02288-0.
Abstract
The recently developed diffusion model for conflict tasks (DMC; Ulrich et al., Cognitive Psychology, 78, 148-174, 2015) provides a good account of data from all standard conflict tasks (e.g., Stroop, Simon, and flanker tasks) within a common evidence accumulation framework. A central feature of DMC's processing dynamics is that there is an initial phase of rapid accumulation of distractor evidence that is then selectively withdrawn from the decision mechanism as processing continues. We argue that this assumption is potentially troubling because it could be viewed as implying qualitative changes in the representation of distractor information over the time course of processing. These changes suggest more than simple inhibition or suppression of distractor information, as they involve evidence produced by distractor processing "changing sign" over time. In this article, we (a) develop a revised DMC (RDMC) whose dynamics operate strictly within the limits of inhibition/suppression (i.e., evidence strength can change monotonically, but cannot change sign); (b) demonstrate that RDMC can predict the full range of delta plots observed in the literature (i.e., both positive-going and negative-going); and (c) show that the model provides excellent fits to Simon and flanker data used to benchmark the original DMC at both the individual and group level. Our model provides a novel account of processing differences across Simon and flanker tasks. Specifically, the two tasks differ in how distractor information is processed on congruent trials, rather than on incongruent trials: congruent trials in the Simon task show relatively slow attention shifting away from distractor information (i.e., location), while complete and rapid attention shifting occurs in the flanker task. Our new model highlights the importance of considering dynamic interactions between top-down goals and bottom-up stimulus effects in conflict processing.
Affiliation(s)
- Ping-Shien Lee
- School of Psychology, University of Queensland, QLD 4072, St. Lucia, Australia
- David K Sewell
- School of Psychology, University of Queensland, QLD 4072, St. Lucia, Australia
3. Canessa E, Chaigneau SE, Moreno S. Using agreement probability to study differences in types of concepts and conceptualizers. Behav Res Methods 2024; 56:93-112. PMID: 36471211. DOI: 10.3758/s13428-022-02030-z.
Abstract
Agreement probability p(a) is a homogeneity measure of lists of properties produced by participants in a Property Listing Task (PLT) for a concept. Agreement probability's mathematical properties allow a rich analysis of property-based descriptions. To illustrate, we use p(a) to delve into the differences between concrete and abstract concepts in sighted and blind populations. Results show that, within both sighted and blind groups, concrete concepts are more homogeneous than abstract ones (i.e., exhibit a higher p(a)) and that concrete concepts in the blind group are less homogeneous than in the sighted sample. This supports the idea that listed properties for concrete concepts should be more similar across subjects due to the influence of visual/perceptual information on the learning process. In contrast, abstract concepts are learned mainly from social and linguistic information, which exhibits more variability among people, thus making the listed properties more dissimilar across subjects. For abstract concepts, the difference in p(a) between sighted and blind groups is not statistically significant. Though this null result should be considered with care, it is expected, because abstract concepts should be learned by attending to the same social and linguistic input in both blind and sighted individuals, and thus there is no reason to expect that the respective lists of properties should differ. Finally, we used p(a) to classify concrete and abstract concepts with a good level of certainty. All these analyses suggest that p(a) can be fruitfully used to study data obtained in a PLT.
Affiliation(s)
- Enrique Canessa
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile
- Faculty of Engineering and Science, Universidad Adolfo Ibáñez, Av. P. Hurtado 750, Lote H, Viña del Mar, Chile
- Sergio E Chaigneau
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile
- Sebastián Moreno
- Faculty of Engineering and Science, Universidad Adolfo Ibáñez, Av. P. Hurtado 750, Lote H, Viña del Mar, Chile
4. Xue J, Jiang T, Chen C, Murty VP, Li Y, Ding Z, Zhang M. The interactive effect of external rewards and self-determined choice on memory. Psychol Res 2023; 87:2101-2110. PMID: 36869894; PMCID: PMC9984743. DOI: 10.1007/s00426-023-01807-x.
Abstract
Both external motivational incentives (e.g., monetary reward) and internal motivational incentives (e.g., self-determined choice) have been found to promote memory, but much less is known about how these two types of incentives interact with each other to affect memory. The current study (N = 108) examined how performance-dependent monetary rewards affected the role of self-determined choice in memory performance, also known as the choice effect. Using a modified and better controlled version of the choice paradigm and manipulating levels of reward, we demonstrated an interactive effect between monetary reward and self-determined choice on 1-day delayed memory performance. Specifically, the choice effect on memory decreased when we introduced the performance-dependent external rewards. These results are discussed in terms of understanding how external and internal motivators interact to impact learning and memory.
Affiliation(s)
- Jingming Xue
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, 100101, People's Republic of China
- Faculty of Psychology, Beijing Normal University, Beijing, 100875, People's Republic of China
- Ting Jiang
- Faculty of Psychology, Beijing Normal University, Beijing, 100875, People's Republic of China
- Chuansheng Chen
- Department of Psychological Science, University of California, Irvine, CA, 92697, USA
- Vishnu P Murty
- Department of Psychology, Temple University, Philadelphia, PA, 19122, USA
- Yuxin Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, 100101, People's Republic of China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China
- Zhuolei Ding
- Faculty of Psychology, Beijing Normal University, Beijing, 100875, People's Republic of China
- Mingxia Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, 100101, People's Republic of China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China
- Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Rd., Beijing, 100101, People's Republic of China
5. Liang X, Wu Z, Yue Z. The association of targets modulates the search efficiency in multitarget searches. Atten Percept Psychophys 2023; 85:1888-1904. PMID: 37568033. DOI: 10.3758/s13414-023-02771-9.
Abstract
Previous studies have found that distractors can affect visual search efficiency when associated with the target in a single-target search. However, multitarget searches are frequently necessary in daily life. In the present study, we examined how the association of targets in a multitarget search affected performance when searching for two targets simultaneously (Experiment 1). In addition, we explored whether the association affected switch cost (Experiment 2) and preparation cost (Experiment 3). Participants were required to learn associations between different colors or shapes and then performed feature search and conjunction search tasks. In all experiments, search efficiency in the conjunction search task was significantly higher under the associated condition than under the neutral condition. Similarly, response times in the associated condition were significantly faster than those in the neutral condition under the conjunction search condition in Experiments 1 and 2. However, in Experiment 3, response times in the associated condition were longer than those in the neutral condition. These results indicate that the association between targets can improve the efficiency of multitarget searches. Furthermore, associations can reduce the time spent searching for individual targets and the switch cost; however, the preparation cost increases.
Affiliation(s)
- Xinxian Liang
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China
- Zehua Wu
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China
- Zhenzhu Yue
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China
6. Dubova M, Goldstone RL. Carving joints into nature: reengineering scientific concepts in light of concept-laden evidence. Trends Cogn Sci 2023; 27:656-670. PMID: 37173157. DOI: 10.1016/j.tics.2023.04.006.
Abstract
A new wave of proposals suggests that scientists must reassess scientific concepts in light of accumulated evidence. However, reengineering scientific concepts in light of data is challenging because scientific concepts affect the evidence itself in multiple ways. Among other possible influences, concepts (i) prime scientists to overemphasize within-concept similarities and between-concept differences; (ii) lead scientists to measure conceptually relevant dimensions more accurately; (iii) serve as units of scientific experimentation, communication, and theory-building; and (iv) affect the phenomena themselves. When looking for improved ways to carve nature at its joints, scholars must take the concept-laden nature of evidence into account to avoid entering a vicious circle of concept-evidence mutual substantiation.
Affiliation(s)
- Marina Dubova
- Cognitive Science Program, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
- Robert L Goldstone
- Cognitive Science Program, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA; Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
7. Yan C, de Lange FP, Richter D. Conceptual Associations Generate Sensory Predictions. J Neurosci 2023; 43:3733-3742. PMID: 37059461; PMCID: PMC10198451. DOI: 10.1523/jneurosci.1874-22.2023.
Abstract
A crucial ability of the human brain is to learn and exploit probabilistic associations between stimuli to facilitate perception and behavior by predicting future events. Although studies have shown how perceptual relationships are used to predict sensory inputs, relational knowledge is often between concepts rather than percepts (e.g., we learned to associate cats with dogs, rather than specific images of cats and dogs). Here, we asked if and how sensory responses to visual input may be modulated by predictions derived from conceptual associations. To this end we exposed participants of both sexes to arbitrary word-word pairs (e.g., car-dog) repeatedly, creating an expectation of the second word, conditional on the occurrence of the first. In a subsequent session, we exposed participants to novel word-picture pairs, while measuring fMRI BOLD responses. All word-picture pairs were equally likely, but half of the pairs conformed to the previously formed conceptual (word-word) associations, whereas the other half violated this association. Results showed suppressed sensory responses throughout the ventral visual stream, including early visual cortex, to pictures that corresponded to the previously expected words compared with unexpected words. This suggests that the learned conceptual associations were used to generate sensory predictions that modulated processing of the picture stimuli. Moreover, these modulations were tuning specific, selectively suppressing neural populations tuned toward the expected input. Combined, our results suggest that recently acquired conceptual priors are generalized across domains and used by the sensory brain to generate category-specific predictions, facilitating processing of expected visual input.
Significance Statement: Perceptual predictions play a crucial role in facilitating perception and the integration of sensory information. However, little is known about whether and how the brain uses more abstract, conceptual priors to form sensory predictions. In our preregistered study, we show that priors derived from recently acquired arbitrary conceptual associations result in category-specific predictions that modulate perceptual processing throughout the ventral visual hierarchy, including early visual cortex. These results suggest that the predictive brain uses prior knowledge across various domains to modulate perception, thereby extending our understanding of the extensive role predictions play in perception.
Affiliation(s)
- Chuyao Yan
- School of Psychology, Nanjing Normal University, Nanjing 210097, China
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
- David Richter
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit, 1081BT Amsterdam, The Netherlands
- Institute Brain and Behavior Amsterdam, 1081BT Amsterdam, The Netherlands
8. Niimi R, Saiki T, Yokosawa K. Auditory scene context facilitates visual recognition of objects in consistent visual scenes. Atten Percept Psychophys 2023; 85:1267-1275. PMID: 36977906. DOI: 10.3758/s13414-023-02699-0.
Abstract
Visual object recognition is facilitated by contextually consistent scenes in which the object is embedded. Scene gist representations extracted from the scenery backgrounds yield this scene consistency effect. Here we examined whether the scene consistency effect is specific to the visual domain or if it is crossmodal. Through four experiments, the accuracy of the naming of briefly presented visual objects was assessed. In each trial, a 4-s sound clip was presented and a visual scene containing the target object was briefly shown at the end of the sound clip. In a consistent sound condition, an environmental sound associated with the scene in which the target object typically appears was presented (e.g., forest noise for a bear target object). In an inconsistent sound condition, a sound clip contextually inconsistent with the target object was presented (e.g., city noise for a bear). In a control sound condition, a nonsensical sound (sawtooth wave) was presented. When target objects were embedded in contextually consistent visual scenes (Experiment 1: a bear in a forest background), consistent sounds increased object-naming accuracy. In contrast, sound conditions did not show a significant effect when target objects were embedded in contextually inconsistent visual scenes (Experiment 2: a bear in a pedestrian crossing background) or in a blank background (Experiments 3 and 4). These results suggested that auditory scene context has weak or no direct influence on visual object recognition. It seems likely that consistent auditory scenes indirectly facilitate visual object recognition by promoting visual scene processing.
9. Tool use acquisition induces a multifunctional interference effect during object processing: evidence from the sensorimotor mu rhythm. Exp Brain Res 2023; 241:1145-1157. PMID: 36920527. DOI: 10.1007/s00221-023-06595-9.
Abstract
A fundamental characteristic of human development is acquiring and accumulating tool use knowledge through observation and sensorimotor experience. Recent studies have shown that, in children and adults, competing action possibilities for grasping objects to move them versus to use them generate a conflict that extinguishes neural motor resonance phenomena during visual object processing. In this study, a training protocol coupled with EEG recordings was administered in virtual reality to healthy adults to evaluate whether a similar conflict occurs for newly acquired tool use knowledge. Participants perceived and manipulated two novel 3D tools for which they had beforehand been trained on either a single usage or a double usage. A weaker reduction of mu-band (10-13 Hz) power, accompanied by a reduced inter-trial phase coherence, was recorded during the perception of the tool associated with the double usage. These effects started within the first 200 ms of visual object processing and were predominantly recorded over the left motor system. Furthermore, interacting with the double-usage tool delayed grasp-to-reach movements. The results highlight a multifunctional interference effect, whereby tool use acquisition reduces the neural motor resonance phenomenon and inhibits the activation of the motor system during subsequent object recognition. These results imply that learned tool use information guides the sensorimotor processing of objects.
10. Stein T, Ciorli T, Otten M. Guns Are Not Faster to Enter Awareness After Seeing a Black Face: Absence of Race-Priming in a Gun/Tool Task During Continuous Flash Suppression. Pers Soc Psychol Bull 2023; 49:405-414. PMID: 35067115. DOI: 10.1177/01461672211067068.
Abstract
In the Weapon Identification Task (WIT), Black faces prime the identification of guns compared with tools. We measured race-induced changes in visual awareness of guns and tools using continuous flash suppression (CFS). Eighty-four participants, primed with Black or Asian faces, indicated the location of a gun or tool target that was temporarily rendered invisible through CFS, which provides a sensitive measure of effects on early visual processing. The same participants also completed a standard (non-CFS) WIT. We replicated the standard race-priming effect in the WIT. In the CFS task, Black and Asian primes did not affect the time guns and tools needed to enter awareness. Thus, race priming does not alter early visual processing but does change the identification of guns and tools. This confirms that race-priming originates from later post-perceptual memory- or response-related processing.
Affiliation(s)
- Timo Stein
- University of Amsterdam, The Netherlands
11. Kutlu E, Barry-Anwar R, Pestana Z, Keil A, Scott LS. A label isn't just a label: Brief training leads to label-dependent visuo-cortical processing in adults. Neuropsychologia 2023; 178:108443. PMID: 36481257. DOI: 10.1016/j.neuropsychologia.2022.108443.
Abstract
The current study examines the extent to which hearing individual-level names (e.g., Jimmy) and category-level labels (e.g., Hitchel) paired with novel objects impacts neural responses across a brief (6 min) learning period. Event-related potentials (ERPs) were recorded while adult participants (n = 44) viewed and heard exemplars of two different species of named novel objects. ERPs were examined for each labeling condition and compared across the first and second half of the learning trials (∼3 min/half). Mean amplitude decreased for the P1 and increased for the N170 from the first to the second half of trials. The decrease in P1 was right lateralized. In addition, the P1 amplitude recorded over right occipitotemporal regions was greater than that recorded over left occipitotemporal areas, but only for objects paired with individual-level labels. Category-level labels did not show regional P1 differences. The N250 component was greatest over the right occipitotemporal region and was enhanced for objects labeled with individual-level relative to category-level names during the second half of trials. Overall, these findings highlight the unfolding of label-dependent visual processing across a short training period in adults. The results suggest that linguistic labels have an important top-down impact on visual processing and that label specificity shapes visuo-cortical responses within a 6-min learning period.
Affiliation(s)
- Ethan Kutlu
- Psychological and Brain Sciences, University of Iowa, USA
- Andreas Keil
- Department of Psychology, University of Florida, USA
- Lisa S Scott
- Department of Psychology, University of Florida, USA
12. Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. PMID: 36100821; PMCID: PMC9950240. DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
13. Williams JR, Markov YA, Tiurina NA, Störmer VS. What You See Is What You Hear: Sounds Alter the Contents of Visual Perception. Psychol Sci 2022; 33:2109-2122. PMID: 36179072. DOI: 10.1177/09567976221121348.
Abstract
Visual object recognition is not performed in isolation but depends on prior knowledge and context. Here, we found that auditory context plays a critical role in visual object perception. Using a psychophysical task in which naturalistic sounds were paired with noisy visual inputs, we demonstrated across two experiments (young adults; ns = 18-40 in Experiments 1 and 2, respectively) that the representations of ambiguous visual objects were shifted toward the visual features of an object that were related to the incidental sound. In a series of control experiments, we found that these effects were not driven by decision or response biases (ns = 40-85) nor were they due to top-down expectations (n = 40). Instead, these effects were driven by the continuous integration of audiovisual inputs during perception itself. Together, our results demonstrate that the perceptual experience of visual objects is directly shaped by naturalistic auditory context, which provides independent and diagnostic information about the visual world.
Affiliation(s)
- Yuri A Markov
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Natalia A Tiurina
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Viola S Störmer
- Department of Psychology, University of California San Diego; Department of Brain and Psychological Sciences, Dartmouth College
14. Language Matters: Rethinking the Use of "Filler". Dermatol Surg 2022; 48:1058. PMID: 36129220. DOI: 10.1097/dss.0000000000003580.
15. The action-sentence compatibility effect (ACE): Meta-analysis of a benchmark finding for embodiment. Acta Psychol (Amst) 2022; 230:103712. PMID: 36103797. DOI: 10.1016/j.actpsy.2022.103712.
Abstract
The embodied account of language comprehension has been one of the most influential theoretical developments in recent decades addressing the question of how humans comprehend and represent language. To examine its assumptions, many studies have made use of behavioral paradigms involving basic compatibility effects. The action-sentence compatibility effect (ACE) is one of the most influential of these compatibility effects and is the most widely cited evidence for the embodied account of language comprehension. However, recently there have been difficulties in extending or even in reliably replicating the ACE. The conflicting findings concerning the ACE and its extensions have led to discussion of whether the ACE is indeed a reliable effect. In a first step, we conducted a meta-analysis using a random-effects model. This analysis revealed a small but significant effect size of the ACE. Furthermore, the task parameter delay emerged as a factor determining whether the ACE appears with a positive or a negative effect direction. A second meta-analytic approach (Fisher's method) supports these findings. Additionally, an analysis of publication bias suggests that there is bias in the ACE literature. In post-hoc analyses of the recent multi-lab investigation of the ACE (Morey et al., 2021), evidence for individual differences in the ACE was found. However, further analyses indicate that these differences are likely due to item-specific variability and the specific way in which items were assigned to conditions in the counterbalancing lists.
16
Mahmood MN. Diversity in dermatology: International medical graduates and their role in the diversity of a specialty. Clin Dermatol 2022; 40:549-553. [PMID: 35182709 DOI: 10.1016/j.clindermatol.2022.02.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
A diverse medical workforce improves patient care. Dermatology is the second-least diverse medical specialty in the United States, and many recent publications have discussed the reasons for this disparity and possible ways to address it. A quarter of physicians in the United States are international medical graduates, which directly affects the cultural diversity of health care. Dermatology has the lowest percentage of international medical graduates in its active physician workforce. Among other measures, the inclusion of more international medical graduates in residency programs can help improve diversity in this specialty and alleviate disparities in dermatological care delivery in underserved communities.
Affiliation(s)
- Muhammad N Mahmood
- Department of Laboratory Medicine & Pathology, University of Alberta Hospital, Edmonton, Alberta, Canada.
17
Glaser M, Knoos M, Schwan S. Localizing, describing, interpreting: effects of different audio text structures on attributing meaning to digital pictures. Instructional Science 2022; 50:729-748. [PMID: 35971387 PMCID: PMC9366788 DOI: 10.1007/s11251-022-09593-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 06/07/2022] [Accepted: 06/28/2022] [Indexed: 06/15/2023]
Abstract
Based on previous research on multimedia learning and text comprehension, an eye-tracking study was conducted to examine the influence of audio text coherence on visual attention and memory in a multimedia learning situation, with a focus on picture comprehension. Audio text coherence was manipulated by the type of LDI structure, that is, whether localization, description, and interpretation followed in immediate succession for each pictorial detail or whether localizations and descriptions of details were separated from their interpretations. Results show that with an LDI-integrated structure, compared to an LDI-separated structure, the referred-to picture elements were fixated longer during interpretation parts, and linkages between descriptions and interpretations were better recalled and recognized. The effects on recall and recognition of linkages were fully mediated by fixation times. This pattern of results can be explained by an interplay between audio text coherence and dual coding processes. It points to the importance of local coherence and the provision of localization information in audio explanations, as well as of visual attention, in allowing for dual coding processes that can be used to better attribute meaning to picture details. Practical implications for the design of educational videos, audio texts on websites, and audio guides are discussed.
Affiliation(s)
- Manuela Glaser
- Leibniz-Institut für Wissensmedien, Schleichstr. 6, 72076 Tuebingen, Germany
- Im Bruckenschlegel 7, 70186 Stuttgart, Germany
- Manuel Knoos
- Leibniz-Institut für Wissensmedien, Schleichstr. 6, 72076 Tuebingen, Germany
- Stephan Schwan
- Leibniz-Institut für Wissensmedien, Schleichstr. 6, 72076 Tuebingen, Germany
18
Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022; 140:104772. [PMID: 35835286 DOI: 10.1016/j.neubiorev.2022.104772] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 05/18/2022] [Accepted: 07/05/2022] [Indexed: 11/26/2022]
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of the neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery' distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are conducted, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness', with implications for a mechanistic account of mental health and wellbeing.
19
James LS, Baier AL, Page RA, Clements P, Hunter KL, Taylor RC, Ryan MJ. Cross-modal facilitation of auditory discrimination in a frog. Biol Lett 2022; 18:20220098. [PMID: 35765810 DOI: 10.1098/rsbl.2022.0098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Stimulation in one sensory modality can affect perception in a separate modality, resulting in diverse effects including illusions in humans. This can also result in cross-modal facilitation, a process where sensory performance in one modality is improved by stimulation in another modality. For instance, a simple sound can improve performance in a visual task in both humans and cats. However, the range of contexts and underlying mechanisms that evoke such facilitation effects remain poorly understood. Here, we demonstrated cross-modal stimulation in wild-caught túngara frogs, a species with well-studied acoustic preferences in females. We first identified that a combined visual and seismic cue (vocal sac movement and water ripple) was behaviourally relevant for females choosing between two courtship calls in a phonotaxis assay. We then found that this combined cross-modal stimulus rescued a species-typical acoustic preference in the presence of background noise that otherwise abolished the preference. These results highlight how cross-modal stimulation can prime attention in receivers to improve performance during decision-making. With this, we provide the foundation for future work uncovering the processes and conditions that promote cross-modal facilitation effects.
Affiliation(s)
- Logan S James
- Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- A Leonie Baier
- Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Rachel A Page
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Paul Clements
- Henson School of Technology, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Kimberly L Hunter
- Department of Biological Sciences, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Ryan C Taylor
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama; Department of Biological Sciences, Salisbury University, 1101 Camden Ave, Salisbury, MD 21801, USA
- Michael J Ryan
- Department of Integrative Biology, University of Texas, Austin, TX 78712, USA; Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
20
Skocypec RM, Peterson MA. Semantic Expectation Effects on Object Detection: Using Figure Assignment to Elucidate Mechanisms. Vision (Basel) 2022; 6:vision6010019. [PMID: 35324604 PMCID: PMC8953613 DOI: 10.3390/vision6010019] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 03/02/2022] [Accepted: 03/15/2022] [Indexed: 11/16/2022] Open
Abstract
Recent evidence suggesting that object detection is improved following valid rather than invalid labels implies that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only for 90 or 100 ms. Familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than for inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. Orientation dependency indicates that effects are mediated by activated object representations rather than features, which are invariant over orientation. Following invalid SSC labels (Study 2), accurate detection RTs were longer than control for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics; it entails semantics.
Affiliation(s)
- Rachel M. Skocypec
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
- Mary A. Peterson
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
21
Pillaud N, Ric F. Generalized Approach/Avoidance Responses to Degraded Affective Stimuli: An Informational Account. Social Cognition 2022. [DOI: 10.1521/soco.2022.40.1.29] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Two studies tested whether affective stimuli presented auditorily spontaneously trigger approach/avoidance reactions toward neutral visual stimuli. In Experiment 1, instructions were to approach or avoid stimuli depending on whether a target was present or absent. Contrary to hypotheses, participants responded faster to the presence of the target after positive (vs. negative) stimuli, and faster to the absence of the target following negative (vs. positive) stimuli, whatever the response modality (i.e., approach/avoidance). We propose that affective stimuli were used in this study as information about the presence/absence of the target. In Experiment 2, we replicated the results of Experiment 1 when participants responded to the presence/absence of the target, whereas an approach/avoidance compatibility effect was observed when each response modality was associated with a target. These results indicate that affective stimuli influence approach/avoidance across perceptual modalities and suggest that the link between affective stimuli and behavioral tendencies could be mediated by the informational value of affect.
22
Lin HP, Kuhlen AK, Melinger A, Aristei S, Abdel Rahman R. Concurrent semantic priming and lexical interference for close semantic relations in blocked-cyclic picture naming: Electrophysiological signatures. Psychophysiology 2021; 59:e13990. [PMID: 34931331 DOI: 10.1111/psyp.13990] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 10/27/2021] [Accepted: 11/30/2021] [Indexed: 11/29/2022]
Abstract
In the present study, we employed event-related brain potentials to investigate the effects of semantic similarity on different planning stages during language production. We manipulated semantic similarity by controlling feature overlap within taxonomical hierarchies. In a blocked-cyclic naming task, participants named pictures in repeated cycles, blocked in semantically close, distant, or unrelated conditions. Only closely related items, but not distantly related items, induced semantic blocking effects. In the first presentation cycle, naming was facilitated, and amplitude modulations in the N1 component around 140-180 ms post-stimulus onset predicted this behavioral facilitation. In contrast, in later cycles, naming was delayed, and a negative-going posterior amplitude modulation around 250-350 ms post-stimulus onset predicted this interference. These findings indicate easier object recognition or identification underlying initial facilitation and increased difficulties during lexical selection. The N1 modulation was reduced but persisted in later cycles in which interference dominated, and the posterior negativity was also present in cycle 1 in which facilitation dominated, demonstrating concurrent effects of conceptual priming and lexical interference in all naming cycles. Our assumptions about the functional role these two opposing forces play in producing semantic context effects are further supported by the finding that the joint modulation of these two ERPs on naming latency exclusively emerged when naming closely related, but not unrelated items. The current findings demonstrate that close relations, but not distant taxonomic relations, induce stronger semantic blocking effects, and that temporally overlapping electrophysiological signatures reflect a trade-off between facilitatory priming and interfering lexical competition.
Affiliation(s)
- Hsin-Pei Lin
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Anna K Kuhlen
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Sabrina Aristei
- Department of Behavioral and Cognitive Sciences, Université du Luxembourg, Esch-sur-Alzette, Luxembourg
- Rasha Abdel Rahman
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
23
Sklar AY, Kardosh R, Hassin RR. From non-conscious processing to conscious events: a minimalist approach. Neurosci Conscious 2021; 2021:niab026. [PMID: 34676105 PMCID: PMC8524171 DOI: 10.1093/nc/niab026] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 06/23/2021] [Accepted: 10/11/2021] [Indexed: 01/22/2023] Open
Abstract
The minimalist approach that we develop here is a framework for appreciating how non-conscious processing and conscious contents shape human cognition, broadly defined. It is composed of three simple principles. First, cognitive processes are inherently non-conscious, while their inputs and (interim) outputs may be consciously experienced. Second, non-conscious processes and elements of the cognitive architecture prioritize information for conscious experiences. Third, conscious events are composed of series of conscious contents and non-conscious processes, with increased duration leading to more opportunity for processing. The narrowness of conscious experiences is conceptualized here as a solution to the problem of channeling the plethora of non-conscious processes into action and communication processes that are largely serial. The framework highlights the importance of prioritization for consciousness, and we provide an illustrative review of three main factors that shape prioritization: stimulus strength, motivational relevance, and mental accessibility. We further discuss when and how this framework (i) is compatible with previous theories, (ii) enables new understandings of established findings and models, and (iii) generates new predictions and understandings.
Affiliation(s)
- Asael Y Sklar
- Edmond & Lily Safra Center for Brain Sciences, The Hebrew University, Edmond J. Safra Campus, Jerusalem 9190401, Israel
- Rasha Kardosh
- Psychology Department, The Hebrew University, Mount Scopus, Jerusalem 91905, Israel
- Ran R Hassin
- James Marshall Chair of Psychology, Psychology Department & The Federmann Center for the Study of Rationality, The Hebrew University, Mount Scopus, Jerusalem 91905, Israel
24
Linguistic labels cue biological motion perception and misperception. Sci Rep 2021; 11:17239. [PMID: 34446746 PMCID: PMC8390742 DOI: 10.1038/s41598-021-96649-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 08/05/2021] [Indexed: 11/24/2022] Open
Abstract
Linguistic labels exert a particularly strong top-down influence on perception. The potency of this influence has been ascribed to their ability to evoke category-diagnostic features of concepts. In doing so, they facilitate the formation of a perceptual template concordant with those features, effectively biasing perceptual activation towards the labelled category. In this study, we employ a cueing paradigm with moving point-light stimuli across three experiments to examine how the number of biological motion features (form and kinematics) encoded in lexical cues modulates the efficacy of lexical top-down influence on perception. We find that the magnitude of lexical influence on biological motion perception rises as a function of the number of biological motion-relevant features carried by both cue and target. When lexical cues encode multiple biological motion features, this influence is robust enough to mislead participants into reporting erroneous percepts, even at a masking level that otherwise yields high performance.
25
Paffen CLE, Sahakian A, Struiksma ME, Van der Stigchel S. Unpredictive linguistic verbal cues accelerate congruent visual targets into awareness in a breaking continuous flash suppression paradigm. Atten Percept Psychophys 2021; 83:2102-2112. [PMID: 33786749 PMCID: PMC8213547 DOI: 10.3758/s13414-021-02297-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/02/2021] [Indexed: 12/03/2022]
Abstract
One of the most influential ideas within the domain of cognition is that of embodied cognition, in which the experienced world is the result of an interplay between an organism's physiology, sensorimotor system, and its environment. An aspect of this idea is that linguistic information activates sensory representations automatically. For example, hearing the word 'red' would automatically activate sensory representations of this color. But does linguistic information prioritize access to awareness of congruent visual information? Here, we show that linguistic verbal cues accelerate matching visual targets into awareness by using a breaking continuous flash suppression paradigm. In a speeded reaction time task, observers heard spoken color labels (e.g., red) followed by colored targets that were either congruent (red), incongruent (green), or neutral (a neutral noncolor word) with respect to the labels. Importantly, and in contrast to previous studies investigating a similar question, the incidence of congruent trials was not higher than that of incongruent trials. Our results show that RTs were selectively shortened for congruent verbal-visual pairings, and that this shortening occurred over a wide range of cue-target intervals. We suggest that linguistic verbal information preactivates sensory representations, so that hearing the word 'red' preactivates (visual) sensory information internally.
Affiliation(s)
- Chris L E Paffen
- Department of Experimental Psychology & Helmholtz Institute, Utrecht University, Heidelberglaan 2, 3584 CS, Utrecht, the Netherlands.
- Andre Sahakian
- Department of Experimental Psychology & Helmholtz Institute, Utrecht University, Heidelberglaan 2, 3584 CS, Utrecht, the Netherlands
- Marijn E Struiksma
- Department of Language, Literature & Communication, Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, the Netherlands
- Stefan Van der Stigchel
- Department of Experimental Psychology & Helmholtz Institute, Utrecht University, Heidelberglaan 2, 3584 CS, Utrecht, the Netherlands
26
Brandman T, Avancini C, Leticevscaia O, Peelen MV. Auditory and Semantic Cues Facilitate Decoding of Visual Object Category in MEG. Cereb Cortex 2021; 30:597-606. [PMID: 31216008 DOI: 10.1093/cercor/bhz110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 04/04/2019] [Accepted: 05/02/2019] [Indexed: 11/13/2022] Open
Abstract
Sounds (e.g., barking) help us to visually identify objects (e.g., a dog) that are distant or ambiguous. While neuroimaging studies have revealed neuroanatomical sites of audiovisual interactions, little is known about the time course by which sounds facilitate visual object processing. Here we used magnetoencephalography to reveal the time course of the facilitatory influence of natural sounds (e.g., barking) on visual object processing and compared this to the facilitatory influence of spoken words (e.g., "dog"). Participants viewed images of blurred objects preceded by a task-irrelevant natural sound, a spoken word, or uninformative noise. A classifier was trained to discriminate multivariate sensor patterns evoked by animate and inanimate intact objects with no sounds, presented in a separate experiment, and tested on sensor patterns evoked by the blurred objects in the three auditory conditions. Results revealed that both sounds and words, relative to uninformative noise, significantly facilitated visual object category decoding between 300 and 500 ms after visual onset. We found no evidence for earlier facilitation by sounds than by words. These findings provide evidence for a semantic route of facilitation by both natural sounds and spoken words, whereby the auditory input first activates semantic object representations, which then modulate the visual processing of objects.
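The cross-decoding logic described above (train on intact-object patterns, test on blurred-object patterns per auditory condition) can be sketched as follows. This is an illustrative simulation only: the data are synthetic, a simple nearest-centroid read-out stands in for the paper's classifier, and the modeled cue-driven signal boost is an assumption, not the reported result:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS = 50

def simulate(n_trials, label, signal):
    """Simulated sensor patterns: class-specific mean (+/- signal) plus unit noise."""
    mean = signal if label == 1 else -signal
    return mean + rng.normal(0.0, 1.0, size=(n_trials, N_SENSORS))

# Train a nearest-centroid decoder on intact objects (strong signal, no sounds)
train_animate = simulate(200, 1, 0.5)
train_inanimate = simulate(200, 0, 0.5)
w = train_animate.mean(axis=0) - train_inanimate.mean(axis=0)      # decoding axis
b = (train_animate.mean(axis=0) + train_inanimate.mean(axis=0)) @ w / 2.0

def accuracy(X, y):
    """Fraction of trials classified correctly by the linear read-out."""
    pred = (X @ w > b).astype(int)
    return float((pred == y).mean())

# Test on blurred objects: informative cues (word, natural sound) are modeled
# here as boosting the category signal relative to uninformative noise
accs = {}
for condition, signal in [("noise", 0.05), ("sound", 0.15), ("word", 0.15)]:
    X = np.vstack([simulate(200, 1, signal), simulate(200, 0, signal)])
    y = np.array([1] * 200 + [0] * 200)
    accs[condition] = accuracy(X, y)
    print(f"{condition}: decoding accuracy = {accs[condition]:.2f}")
```

The key design point mirrored here is that the decoder never sees blurred or cued trials during training, so any above-chance test accuracy reflects generalization of the category code from intact to degraded stimuli.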
Affiliation(s)
- Talia Brandman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Chiara Avancini
- Centre for Neuroscience in Education, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Olga Leticevscaia
- Cell and Developmental Biology, University College London, London WC1E 6BT, United Kingdom
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 HR Nijmegen, The Netherlands
27
Stein T, Peelen MV. Dissociating conscious and unconscious influences on visual detection effects. Nat Hum Behav 2021; 5:612-624. [PMID: 33398144 DOI: 10.1038/s41562-020-01004-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 10/21/2020] [Indexed: 01/28/2023]
Abstract
The scope of unconscious processing is highly debated, with recent studies showing that even high-level functions such as perceptual integration and category-based attention occur unconsciously. For example, upright faces that are suppressed from awareness through interocular suppression break into awareness more quickly than inverted faces. Similarly, verbal object cues boost otherwise invisible objects into awareness. Here, we replicate these findings, but find that they reflect a general difference in detectability not specific to interocular suppression. To dissociate conscious and unconscious influences on visual detection effects, we use an additional discrimination task to rule out conscious processes as a cause for these differences. Results from this detection-discrimination dissociation paradigm reveal that, while face orientation is processed unconsciously, category-based attention requires awareness. These findings provide insights into the function of conscious perception and offer an experimental approach for mapping out the scope and limits of unconscious processing.
Affiliation(s)
- Timo Stein
- Brain and Cognition, Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands.
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
28
Synesthesia does not help to recover perceptual dominance following flash suppression. Sci Rep 2021; 11:7566. [PMID: 33828189 PMCID: PMC8027846 DOI: 10.1038/s41598-021-87223-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 03/23/2021] [Indexed: 11/20/2022] Open
Abstract
Grapheme-colour synesthesia occurs when letters or numbers elicit an abnormal colour sensation (e.g., printed black letters are perceived as coloured). This phenomenon is typically reported following explicit presentation of graphemes. Very few studies have investigated colour sensations in synesthesia in the absence of visual awareness. We took advantage of the dichoptic flash suppression paradigm to temporarily render a stimulus presented to one eye invisible. Synesthetic alphanumeric and non-synesthetic stimuli were presented to 21 participants (11 synesthetes) in achromatic and chromatic experimental conditions. The test stimulus was first displayed to one eye and then masked by a sudden presentation of visual noise in the other eye (flash suppression). The time for an image to be re-perceived following the onset of the suppressive noise was calculated. Trials in which no flash suppression was performed, but the perceptual suppression of the flash was instead mimicked, were also tested. Results showed that target detection by synesthetes was significantly better than by controls in the absence of flash suppression. No difference was found between the groups in the flash suppression condition. Our findings suggest that synesthesia is associated with enhanced perception for overt recognition but does not provide an advantage in recovering from perceptual suppression. Further studies are needed to investigate synesthesia in relation to visual awareness.
29
Viganò S, Borghesani V, Piazza M. Symbolic categorization of novel multisensory stimuli in the human brain. Neuroimage 2021; 235:118016. [PMID: 33819609 DOI: 10.1016/j.neuroimage.2021.118016] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 03/15/2021] [Accepted: 03/17/2021] [Indexed: 10/21/2022] Open
Abstract
When primates (both human and non-human) learn to categorize simple visual or acoustic stimuli by means of non-verbal matching tasks, two types of changes occur in their brain: early sensory cortices increase the precision with which they encode sensory information, and parietal and lateral prefrontal cortices develop a categorical response to the stimuli. Contrary to non-human animals, however, our species mostly constructs categories using linguistic labels. Moreover, we naturally tend to define categories by means of multiple sensory features of the stimuli. Here we trained adult subjects to parse a novel audiovisual stimulus space into four orthogonal categories, by associating each category with a specific symbol. We then used multi-voxel pattern analysis (MVPA) to show that during a cross-format category repetition detection task three neural representational changes were detectable. First, visual and acoustic cortices increased both precision and selectivity for their preferred sensory feature, displaying increased sensory segregation. Second, a frontoparietal network developed a multisensory object-specific response. Third, the right hippocampus and, at least to some extent, the left angular gyrus developed a shared representational code common to symbols and objects. In particular, the right hippocampus displayed the highest level of abstraction and generalization from one format to the other, and also predicted symbolic categorization performance outside the scanner. Taken together, these results indicate that when humans categorize multisensory objects by means of language, the set of changes occurring in the brain only partially overlaps with that described by classical models of non-verbal unisensory categorization in primates.
Affiliation(s)
- Simone Viganò
- Centre for Mind/Brain Sciences, University of Trento, Italy.
- Manuela Piazza
- Centre for Mind/Brain Sciences, University of Trento, Italy
30
Davis CP, Yee E. Building semantic memory from embodied and distributional language experience. Wiley Interdisciplinary Reviews: Cognitive Science 2021; 12:e1555. [PMID: 33533205 DOI: 10.1002/wcs.1555] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Revised: 09/07/2020] [Accepted: 01/10/2021] [Indexed: 01/06/2023]
Abstract
Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledge base to recognize and adapt to most situations we encounter. This knowledge base is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails reinstantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences. This article is categorized under: Psychology > Language; Neuroscience > Cognition; Linguistics > Language in Mind and Brain.
Affiliation(s)
- Charles P Davis
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, USA
- Eiling Yee
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, USA
31
Hebert KP, Goldinger SD, Walenchok SC. Eye movements and the label feedback effect: Speaking modulates visual search via template integrity. Cognition 2021; 210:104587. [PMID: 33508577] [DOI: 10.1016/j.cognition.2021.104587]
Abstract
The label-feedback hypothesis (Lupyan, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter response times (RTs) and higher accuracy. In the present investigation, we conceptually replicated and extended their study, using additional control conditions and recording eye movements during search. Our goal was to evaluate whether self-directed speech influences target locating (i.e. attentional guidance) or object perception (i.e., distractor rejection and target appreciation). In three experiments, during object search, people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names (all within-participants). Experiments 1 and 2 examined search RTs and accuracy: Speaking target names improved performance, without differences among the remaining conditions. Experiment 3 incorporated eye-tracking: Gaze fixation patterns suggested that language does not affect attentional guidance, but instead affects both distractor rejection and target appreciation. When search trials were conditionalized according to distractor fixations, language effects became more orderly: Search was fastest while people spoke target names, followed in linear order by the nonword, distractor-absent, and distractor-present conditions. We suggest that language affects template maintenance during search, allowing fluent differentiation of targets and distractors. Materials, data, and analyses can be retrieved here: https://osf.io/z9ex2/.
32
33
Nejati V. Effect of stimulus dimension on perception and cognition. Acta Psychol (Amst) 2021; 212:103208. [PMID: 33220612] [DOI: 10.1016/j.actpsy.2020.103208]
Abstract
Stimulus characteristics have a decisive role in our perception and cognition. In the present study, we aimed to evaluate the effects of stimulus dimensionality, two-dimensional (2D) versus three-dimensional (3D), on perception and working memory. In the first experiment, using eye tracking, a higher blink rate, larger pupil size, and a greater number of saccades for 3D compared to 2D stimuli revealed the higher perceptual demand of 3D stimuli. In the second experiment, a visual search task showed higher response times for 3D stimuli but equal performance with 2D and 3D stimuli in a spatial working memory task. In the third experiment, four working memory tasks with high and low cognitive and perceptual load revealed that 3D stimuli are memorized better under both low and high working memory load. We conclude that 3D stimuli, compared to 2D stimuli, impose a higher load on the perceptual system but are memorized better. It could be concluded that filtering should occur early in the perceptual system to prevent overload.
Affiliation(s)
- Vahid Nejati
- Department of Psychology, Faculty of Psychology and Educational Science, Shahid Beheshti University, Tehran, Iran.
34
Speaking for seeing: Sentence structure guides visual event apprehension. Cognition 2020; 206:104516. [PMID: 33228969] [DOI: 10.1016/j.cognition.2020.104516]
Abstract
Human experience and communication are centred on events, and event apprehension is a rapid process that draws on the visual perception and immediate categorization of event roles ("who does what to whom"). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or patient was activated during speaking, transiently modulating the apprehension of subsequently viewed unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on pictures of unrelated events that were briefly presented (for 300 ms) next were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.
35
Borghi AM. A Future of Words: Language and the Challenge of Abstract Concepts. J Cogn 2020; 3:42. [PMID: 33134816] [PMCID: PMC7583217] [DOI: 10.5334/joc.134]
Abstract
The paper outlines one of the most important challenges that embodied and grounded theories need to face: explaining how abstract concepts (abstractness) are acquired, represented, and used. I illustrate the view according to which abstract concepts are grounded not only in sensorimotor experiences, like concrete concepts, but also and to a greater extent in linguistic, social, and inner experiences. Specifically, I discuss the role played by metacognition, inner speech, social metacognition, and interoception. I also present evidence showing that the weight of linguistic, social, and inner experiences varies depending on the sub-kind of abstract concepts considered (e.g., mental states and spiritual concepts, numbers, emotions, social concepts). I argue that the challenge of explaining abstract concept representation implies the recognition of: (a) the role of language, understood as an inner and social tool, in shaping our mind; (b) the importance of differences across languages; (c) the existence of different kinds of abstract concepts; (d) the necessity of adopting new paradigms able to capture the use of abstract concepts in context and in interactive situations. This challenge should be addressed with an integrated approach that bridges developmental, anthropological, and neuroscientific studies. This approach extends embodied and grounded views by incorporating insights from distributional statistics views of meaning, from pragmatics, and from semiotics.
Affiliation(s)
- Anna M. Borghi
- Sapienza University of Rome, Department of Dynamic and Clinical Psychology, IT
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, IT
36
Towards Understanding the Task Dependency of Embodied Language Processing: The Influence of Colour During Language-Vision Interactions. J Cogn 2020; 3:41. [PMID: 33134815] [PMCID: PMC7583718] [DOI: 10.5334/joc.135]
Abstract
A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘…spinach…’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g. green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.
37
Lupyan G, Abdel Rahman R, Boroditsky L, Clark A. Effects of Language on Visual Perception. Trends Cogn Sci 2020; 24:930-944. [PMID: 33012687] [DOI: 10.1016/j.tics.2020.08.005]
Abstract
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. We discuss how effects of language on perception, rather than being fringe or exotic as they are sometimes portrayed, arise naturally from the interactive and predictive nature of perception.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA.
- Andy Clark
- University of Sussex, Brighton, UK; Macquarie University, Sydney, Australia
38
Language is the missing link in action-perception coupling: an EEG study. Sci Rep 2020; 10:14587. [PMID: 32884072] [PMCID: PMC7471270] [DOI: 10.1038/s41598-020-71575-w]
Abstract
The paper reports an electrophysiological (EEG) study investigating how language is involved in perception–action relations in musically trained and untrained participants. Using an original backward priming paradigm, participants were exposed to muted point-light videos of violinists performing piano or forte nuances, followed by a congruent vs. incongruent word. After the video presentation, participants were asked to decide whether the musician was playing a piano or forte musical nuance. EEG results showed a greater P200 event-related potential for trained participants at the occipital site, and a greater N400 effect for untrained participants at the central site. Musically untrained participants were more accurate when the word was semantically congruent with the gesture than when it was incongruent. Overall, language seems to influence the performance of untrained participants, for whom perception–action couplings are less automatized.
39
Schiller NO, Boutonnet BPA, De Heer Kloots MLS, Meelen M, Ruijgrok B, Cheng LLS. (Not so) Great Expectations: Listening to Foreign-Accented Speech Reduces the Brain's Anticipatory Processes. Front Psychol 2020; 11:2143. [PMID: 32982877] [PMCID: PMC7479827] [DOI: 10.3389/fpsyg.2020.02143]
Abstract
This study examines the effect of foreign-accented speech on the predictive ability of our brain. Listeners actively anticipate upcoming linguistic information in the speech signal so as to facilitate and reduce processing load. However, it is unclear whether or not listeners also do this when they are exposed to speech from non-native speakers. In the present study, we exposed native Dutch listeners to sentences produced by native and non-native speakers while measuring their brain activity using electroencephalography. We found that listeners’ brain activity differed depending on whether they listened to native or non-native speech. However, participants’ overall performance as measured by word recall rate was unaffected. We discussed the results in relation to previous findings as well as the automaticity of anticipation.
Affiliation(s)
- Niels O Schiller
- Leiden University Centre for Linguistics, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Marieke Meelen
- Leiden University Centre for Linguistics, Leiden University, Leiden, Netherlands
- Bobby Ruijgrok
- Leiden University Centre for Linguistics, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Lisa L-S Cheng
- Leiden University Centre for Linguistics, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
40
Foerster FR, Borghi AM, Goslin J. Labels strengthen motor learning of new tools. Cortex 2020; 129:1-10. [DOI: 10.1016/j.cortex.2020.04.006]
41
Suzuki TN. Other Species' Alarm Calls Evoke a Predator-Specific Search Image in Birds. Curr Biol 2020; 30:2616-2620.e2. [PMID: 32413306] [DOI: 10.1016/j.cub.2020.04.062]
Abstract
Many animals produce vocal alarm signals when they detect a predator, and heterospecific species sharing predators often eavesdrop on and respond to these calls [1]. Despite the widespread occurrence of interspecific eavesdropping in animals, its underlying cognitive process remains to be elucidated. If alarm calls, like human referential words, denote a specific predator type (e.g., "snake!"), then receivers may retrieve a mental image of the predator when hearing these calls [2-4]. Here, using a recently developed experimental paradigm [5], I test whether heterospecific alarm calls evoke a predator-specific visual search image in wild birds. During playback of snake-specific alarm calls produced by Japanese tits (Parus minor), coal tits (Periparus ater) approach a wooden stick being moved in a snake-like manner. However, coal tits do not approach the same stick when hearing other call types or if the stick's movement is dissimilar to that of a snake. Thus, Japanese tit snake alarms cause coal tits to specifically enhance visual attention to snakelike objects. These results provide experimental evidence for the evocation of visual search images by heterospecific alarm calls, highlighting the importance of integrating cross-modal information in interspecific eavesdropping.
Affiliation(s)
- Toshitaka N Suzuki
- The Hakubi Center for Advanced Research, Kyoto University, Yoshida-honmachi, Kyoto 606-8501, Japan.
42
Yuan L, Xiang V, Crandall D, Smith L. Learning the generative principles of a symbol system from limited examples. Cognition 2020; 200:104243. [DOI: 10.1016/j.cognition.2020.104243]
43
Montero-Melis G, Isaksson P, van Paridon J, Ostarek M. Does using a foreign language reduce mental imagery? Cognition 2020; 196:104134. [DOI: 10.1016/j.cognition.2019.104134]
44
Fugate JMB, MacDonald C, O'Hare AJ. Emotion Words' Effect on Visual Awareness and Attention of Emotional Faces. Front Psychol 2020; 10:2896. [PMID: 32010012] [PMCID: PMC6974626] [DOI: 10.3389/fpsyg.2019.02896]
Abstract
To explore whether the meaning of a word changes visual processing of emotional faces (i.e., visual awareness and visual attention), we performed two complementary studies. In Experiment 1, we presented participants with emotion and control words and then tracked their visual awareness for two competing emotional faces using a binocular rivalry paradigm. Participants experienced the emotional face congruent with the emotion word for longer than a word-incongruent emotional face, as would be expected if the word was biasing awareness toward the (unseen) face. In Experiment 2, we similarly presented participants with emotion and control words prior to presenting emotional faces using a divided visual field paradigm. Emotion words were congruent with either the emotional face in the right or left visual field. After the presentation of faces, participants saw a dot in either the left or right visual field. Participants were slower to identify the location of the dot when it appeared in the same visual field as the emotional face congruent with the emotion word. The effect was limited to the left hemisphere (RVF), as would be expected for linguistic integration of the word with the face. Since the task was not linguistic, but rather a simple dot-probe task, participants were slower in their responses under these conditions because they likely had to disengage from the additional linguistic processing caused by the word-face integration. These findings indicate that emotion words bias visual awareness for congruent emotional faces, as well as shift attention toward congruent emotional faces.
Affiliation(s)
- Jennifer M B Fugate
- Department of Psychology, University of Massachusetts Dartmouth, Dartmouth, MA, United States
- Cameron MacDonald
- Department of Psychology, University of Massachusetts Dartmouth, Dartmouth, MA, United States
- Aminda J O'Hare
- Department of Psychology, Weber State University, Ogden, UT, United States
45
Zettersten M, Lupyan G. Finding categories through words: More nameable features improve category learning. Cognition 2019; 196:104135. [PMID: 31821963] [DOI: 10.1016/j.cognition.2019.104135]
Abstract
What are the cognitive consequences of having a name for something? Having a word for a feature makes it easier to communicate about a set of exemplars belonging to the same category (e.g., "the red things"). But might it also make it easier to learn the category itself? Here, we provide evidence that the ease of learning category distinctions based on simple visual features is predicted from the ease of naming those features. Across seven experiments, participants learned categories composed of colors or shapes that were either easy or more difficult to name in English. Holding the category structure constant, when the underlying features of the category were easy to name, participants were faster and more accurate in learning the novel category. These results suggest that compact verbal labels may facilitate hypothesis formation during learning: it is easier to pose the hypothesis "it is about redness" than "it is about that pinkish-purplish color". Our results have consequences for understanding how developmental and cross-linguistic differences in a language's vocabulary affect category learning and conceptual development.
Affiliation(s)
- Martin Zettersten
- Psychology Department, University of Wisconsin-Madison, 1202 W Johnson Street, Madison, WI 53706, USA.
- Gary Lupyan
- Psychology Department, University of Wisconsin-Madison, 1202 W Johnson Street, Madison, WI 53706, USA
46
Suzuki TN, Wheatcroft D, Griesser M. The syntax-semantics interface in animal vocal communication. Philos Trans R Soc Lond B Biol Sci 2019; 375:20180405. [PMID: 31735156] [DOI: 10.1098/rstb.2018.0405]
Abstract
Syntax (rules for combining words or elements) and semantics (meaning of expressions) are two pivotal features of human language, and interaction between them allows us to generate a limitless number of meaningful expressions. While both features were traditionally thought to be unique to human language, research over the past four decades has revealed intriguing parallels in animal communication systems. Many birds and mammals produce specific calls with distinct meanings, and some species combine multiple meaningful calls into syntactically ordered sequences. However, it remains largely unclear whether, like phrases or sentences in human language, the meaning of these call sequences depends on both the meanings of the component calls and their syntactic order. Here, leveraging recently demonstrated examples of meaningful call combinations, we introduce a framework for exploring the interaction between syntax and semantics (i.e. the syntax-semantic interface) in animal vocal sequences. We outline methods to test the cognitive mechanisms underlying the production and perception of animal vocal sequences and suggest potential evolutionary scenarios for syntactic communication. We hope that this review will stimulate phenomenological studies on animal vocal sequences as well as experimental studies on the cognitive processes, which promise to provide further insights into the evolution of language. This article is part of the theme issue 'What can animal communication teach us about human language?'
Affiliation(s)
- Toshitaka N Suzuki
- Department of General Systems Studies, University of Tokyo, Tokyo, Japan; The Hakubi Center for Advanced Research, Kyoto University, Kyoto, Japan; Graduate School of Science, Kyoto University, Kyoto, Japan
- David Wheatcroft
- Department of Animal Ecology, Uppsala University, Uppsala, Sweden
- Michael Griesser
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich, Switzerland
47
Cordeiro CM. A corpus-based approach to understanding market access in fisheries and aquaculture international business research: A systematic literature review. Aquaculture and Fisheries 2019. [DOI: 10.1016/j.aaf.2019.06.001]
48
Cermeño-Aínsa S. The cognitive penetrability of perception: A blocked debate and a tentative solution. Conscious Cogn 2019; 77:102838. [PMID: 31678779] [DOI: 10.1016/j.concog.2019.102838]
Abstract
Despite the extensive body of psychological findings suggesting that cognition influences perception, the debate between defenders and detractors of the cognitive penetrability of perception persists. While detractors demand more strictness in psychological experiments, proponents consider that empirical studies show that cognitive penetrability occurs. These considerations have led some theorists to propose that the debate has reached a dead end. The issue of where perception ends and cognition begins is, I argue, one of the reasons why the debate is cornered. Another reason is the inability of psychological studies to present uncontroversial interpretations of the results obtained. Turning to other kinds of empirical sources is therefore required to clarify the debate. In this paper, I explain where the debate is blocked and suggest that neuroscientific evidence, together with the predictive coding account, might tip the discussion toward the penetrability thesis.
Affiliation(s)
- Sergio Cermeño-Aínsa
- Departamento de Filosofía, Facultad de Filosofía y Letras, 08193 Cerdanyola del Vallés, Spain.
49
He C, Cheung OS. Category selectivity for animals and man-made objects: Beyond low- and mid-level visual features. J Vis 2019; 19:22. [DOI: 10.1167/19.12.22]
Affiliation(s)
- Chenxi He
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
- Olivia S. Cheung
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
50
Heyman T, Maerten AS, Vankrunkelsven H, Voorspoels W, Moors P. Sound-Symbolism Effects in the Absence of Awareness: A Replication Study. Psychol Sci 2019; 30:1638-1647. [PMID: 31638871] [DOI: 10.1177/0956797619875482]
Abstract
People have been shown to link particular sounds with particular shapes. For instance, the round-sounding nonword bouba tends to be associated with curved shapes, whereas the sharp-sounding nonword kiki is deemed to be related to angular shapes. People's tendency to associate sounds and shapes has been observed across different languages. In the present study, we reexamined the claim by Hung, Styles, and Hsieh (2017) that such sound-shape mappings can occur before an individual becomes aware of the visual stimuli. More precisely, we replicated their first experiment, in which congruent and incongruent stimuli (e.g., bouba presented in a round shape or an angular shape, respectively) were rendered invisible through continuous flash suppression. The results showed that congruent combinations, on average, broke suppression faster than incongruent combinations, thus providing converging evidence for Hung and colleagues' assertions. Collectively, these findings now provide a solid basis from which to explore the boundary conditions of the effect.
Affiliation(s)
- Tom Heyman
- Department of Experimental Psychology, KU Leuven
- Pieter Moors
- Department of Experimental Psychology, KU Leuven