1. Mental imagery in animals: Learning, memory, and decision-making in the face of missing information. Learn Behav 2020;47:193-216. PMID: 31228005. DOI: 10.3758/s13420-019-00386-5.
Abstract
When we open our eyes, we see a world filled with objects and events. Yet, due to occlusion of some objects by others, we only have partial perceptual access to the events that transpire around us. I discuss the body of research on mental imagery in animals. I first cover prior studies of mental rotation in pigeons and imagery using working memory procedures first developed for human studies. Next, I discuss the seminal work on a type of learning called mediated conditioning in rats. I then provide more in-depth coverage of work from my lab suggesting that rats can use imagery to fill in missing details of the world that are expected but hidden from perception. We have found that rats make use of an active expectation (i.e., an image) of a hidden visual event. I describe the behavioral and neurobiological studies investigating the use of a mental image, its theoretical basis, and its connections to current human cognitive neuroscience research on episodic memory, imagination, and mental simulations. Collectively, the reviewed literature provides insight into the mechanisms that mediate the flexible use of an image during ambiguous situations. I position this work in the broader scientific and philosophical context surrounding the concept of mental imagery in human and nonhuman animals.
3. Cobos PL, López FJ, Luque D. Interference between cues of the same outcome depends on the causal interpretation of the events. Q J Exp Psychol (Hove) 2018;60:369-86. PMID: 17366306. DOI: 10.1080/17470210601000961.
Abstract
In an interference-between-cues design, the expression of a learned Cue A → Outcome 1 association has been shown to be impaired if another cue, B, is separately paired with the same outcome in a second learning phase. In the present study, we assessed whether this interference effect is mediated by participants' previous causal knowledge. This was achieved by having participants learn in a diagnostic situation in Experiment 1a, and then by manipulating the causal order of the learning task in Experiments 1b and 2. If participants use their previous causal knowledge during the learning process, interference should be observed only in the diagnostic situation, because only there does Outcome 1 act as a common cause of two disjoint effects, cues A and B. Consistent with this prediction, interference between cues was found only in Experiment 1a and in the diagnostic conditions of Experiments 1b and 2.
Affiliation(s)
- Pedro L Cobos
- Departamento de Psicología Básica, University of Málaga, Málaga, Spain.
4. De Houwer J, Vandorpe S, Beckers T. Statistical contingency has a different impact on preparation judgements than on causal judgements. Q J Exp Psychol (Hove) 2018;60:418-32. PMID: 17366309. DOI: 10.1080/17470210601001084.
Abstract
Previous studies on causal learning showed that judgements about the causal effect of a cue on an outcome depend on the statistical contingency between the presence of the cue and the outcome. We demonstrate that statistical contingency has a different impact on preparation judgements (i.e., judgements about the usefulness of responses that allow one to prepare for the outcome). Our results suggest that preparation judgements primarily reflect information about the outcome in prior situations that are identical to the test situation. These findings also add to previous evidence showing that people can use contingency information in a flexible manner depending on the type of test question.
Affiliation(s)
- Jan De Houwer
- Department of Psychology, Ghent University, Ghent, Belgium.
5.
Abstract
A major topic within human learning, the field of contingency judgement, began to emerge about 25 years ago following publication of an article on depressive realism by Alloy and Abramson (1979). Subsequently, associationism has been the dominant theoretical framework for understanding contingency learning but this has been challenged in recent years by an alternative cognitive or inferential approach. This article outlines the key conceptual differences between these approaches and summarizes some of the main methods that have been employed to distinguish between them.
Collapse
Affiliation(s)
- David R Shanks
- Department of Psychology, University College London, London, UK.
6. Fast CD, Flesher MM, Nocera NA, Fanselow MS, Blaisdell AP. Learning history and cholinergic modulation in the dorsal hippocampus are necessary for rats to infer the status of a hidden event. Hippocampus 2016;26:804-15. PMID: 26703089. PMCID: PMC4866895. DOI: 10.1002/hipo.22564.
Abstract
Identifying statistical patterns between environmental stimuli enables organisms to respond adaptively when cues are later observed. However, stimuli are often obscured from detection, necessitating behavior under conditions of ambiguity. Considerable evidence indicates decisions under ambiguity rely on inference processes that draw on past experiences to generate predictions under novel conditions. Despite the high demand for this process and the observation that it deteriorates disproportionately with age, the underlying mechanisms remain unknown. We developed a rodent model of decision-making during ambiguity to examine features of experience that contribute to inference. Rats learned either a simple (positive patterning) or complex (negative patterning) instrumental discrimination between the illumination of one or two lights. During test, only one light was lit while the other relevant light was blocked from physical detection (covered by an opaque shield, rendering its status ambiguous). We found experience with the complex negative patterning discrimination was necessary for rats to behave sensitively to the ambiguous test situation. These rats behaved as if they inferred the presence of the hidden light, responding differently than when the light was explicitly absent (uncovered and unlit). Differential expression profiles of the immediate early gene cFos indicated hippocampal involvement in the inference process while localized microinfusions of the muscarinic antagonist, scopolamine, into the dorsal hippocampus caused rats to behave as if only one light was present. That is, blocking cholinergic modulation prevented the rat from inferring the presence of the hidden light. Collectively, these results suggest cholinergic modulation mediates recruitment of hippocampal processes related to past experiences and transfer of these processes to make decisions during ambiguous situations. Our results correspond with correlations observed between human brain function and inference abilities, suggesting our experiments may inform interventions to alleviate or prevent cognitive dysfunction. © 2015 Wiley Periodicals, Inc.
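The two discriminations can be written as simple response rules over the two lights; negative patterning amounts to an exclusive-or, which cannot be solved from either light alone. A minimal sketch (not the paper's actual procedure or stimuli):

```python
def positive_patterning(light1, light2):
    # Simple discrimination: respond only when BOTH lights are on
    # (compound reinforced, single lights not).
    return light1 and light2

def negative_patterning(light1, light2):
    # Complex discrimination: respond only when EXACTLY ONE light is on
    # (single lights reinforced, compound not) -- an exclusive-or.
    return light1 != light2

# Under negative patterning, the status of the second light reverses the
# correct response, so a covered (ambiguous) light forces an inference
# about its hidden state.
print(negative_patterning(True, False))  # True: respond
print(negative_patterning(True, True))   # False: withhold
```

Because the correct response under negative patterning depends on the hidden light's state, only rats trained on this discrimination had any reason to infer it at test, consistent with the reported results.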
Affiliation(s)
- Cynthia D. Fast, M. Melissa Flesher, Nathanial A. Nocera, Michael S. Fanselow, and Aaron P. Blaisdell
- University of California, Los Angeles, Department of Psychology, Los Angeles, CA 90095-1563
7. Rehder B. Independence and dependence in human causal reasoning. Cogn Psychol 2014;72:54-107. DOI: 10.1016/j.cogpsych.2014.02.002.
8. Goedert KM, Grimm LR, Markman AB, Spellman BA. Priming interdependence affects processing of context information in causal inference--but not how you might think. Acta Psychol (Amst) 2014;146:41-50. PMID: 24374491. DOI: 10.1016/j.actpsy.2013.11.006.
Abstract
Cultural mindset is related to performance on a variety of cognitive tasks. In particular, studies of both chronic and situationally-primed mindsets show that individuals with a relatively interdependent mindset (i.e., an emphasis on relationships and connections among individuals) are more sensitive to background contextual information than individuals with a more independent mindset. Two experiments tested whether priming cultural mindset would affect sensitivity to background causes in a contingency learning and causal inference task. Participants were primed with either an independent or an interdependent mindset and then saw complete contingency information on each of 12 trials for two cover stories in Experiment 1 (hiking causing skin rashes, severed brakes causing wrecked cars) and two additional cover stories in Experiment 2 (school deadlines causing stress, fertilizers causing plant growth). We expected that, relative to independent-primed participants, interdependent-primed participants would give more weight to the explicitly presented data indicative of hidden alternative background causes, but they did not. In Experiment 1, interdependent-primed participants gave less weight to the data indicative of hidden background causes for the car-accident cover story and showed decreased sensitivity to the contingencies for that story. In Experiment 2, they placed less weight on the observable data for cover stories that supported more extra-experimental causes, whereas independents' sensitivity did not vary with these extra-experimental causes. Thus, interdependent-primed participants were more sensitive to background causes not explicitly presented in the experiment, but this sensitivity hurt rather than improved their acquisition of the explicitly presented contingency information.
9.
Abstract
Previous work has shown that predictions can be mediated by mechanistic beliefs. The present study shows that such mediation only occurs in the face of contradictory, and not corroborative, evidence. In four experiments, we presented participants with causal statements describing a common-cause structure (E1 ← C → E2). Then we informed them of the states of C and E1 and asked them to judge the likelihood of E2. In Experiments 1 and 2, we manipulated whether the mechanisms supporting the two effects were the same or different, and whether the evidence presented confirmed or contradicted the participants' expectations. The relation between the mechanisms only influenced predictions when evidence contradicted the expectations, but not when it was consistent. In Experiments 3 and 4, we used a common-cause structure with identical mechanisms. We manipulated the order in which predictions were made. When confirmatory predictions were made before contradictory predictions, mechanistic modulation was not observed in the confirmatory case. In contrast, the modulation was found when confirmatory predictions were made after contradictory ones. The results support the contradiction hypothesis that causal structure is revised during prediction, but only in the face of unexpected evidence.
10. Park J, Sloman SA. Mechanistic beliefs determine adherence to the Markov property in causal reasoning. Cogn Psychol 2013;67:186-216. PMID: 24152569. DOI: 10.1016/j.cogpsych.2013.09.002.
Abstract
What kind of information do people use to make predictions? Causal Bayes nets theory implies that people should follow structural constraints like the Markov property in the form of the screening-off rule, but previous work shows little evidence that people do. We tested six hypotheses that attempt to explain violations of screening off, some by asserting that people use mechanistic knowledge to infer additional latent structure. In three experiments, we manipulated whether the causal relations among variables within a causal structure were supported by the same or different mechanisms. The experiments differed in the type of causal structures (common cause vs. chain), the way that causal structures were presented (verbal description vs. observational learning), how the mechanisms were presented (explicit description vs. implicit description vs. visual hint), and the number of predictions requested (2 vs. 24). The results revealed that the screening-off rule was violated more often when the mechanisms were the same than when they were different. The findings suggest that people use knowledge about underlying mechanisms to infer latent structure for prediction.
Affiliation(s)
- Juhwa Park
- Sungkyunkwan University, Interaction Science Institute, 3 ga, Myungryndong, Jongrogu, Seoul, South Korea.
11. Mitchell CJ, Griffiths O, More P, Lovibond PF. Contingency bias in probability judgement may arise from ambiguity regarding additional causes. Q J Exp Psychol (Hove) 2013;66:1675-86. DOI: 10.1080/17470218.2012.752854.
Abstract
In laboratory contingency learning tasks, people usually give accurate estimates of the degree of contingency between a cue and an outcome. However, if they are asked to estimate the probability of the outcome in the presence of the cue, they tend to be biased by the probability of the outcome in the absence of the cue. This bias is often attributed to an automatic contingency detection mechanism, which is said to act via an excitatory associative link to activate the outcome representation at the time of testing. We conducted 3 experiments to test alternative accounts of contingency bias. Participants were exposed to the same outcome probability in the presence of the cue, but different outcome probabilities in the absence of the cue. Phrasing the test question in terms of frequency rather than probability and clarifying the test instructions reduced but did not eliminate contingency bias. However, removal of ambiguity regarding the presence of additional causes during the test phase did eliminate contingency bias. We conclude that contingency bias may be due to ambiguity in the test question, and therefore it does not require postulation of a separate associative link-based mechanism.
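The distinction at issue between contingency and outcome probability can be made concrete with a quick calculation over hypothetical trial counts:

```python
# Hypothetical trial counts from a contingency learning task:
# a = cue & outcome, b = cue & no outcome,
# c = no cue & outcome, d = no cue & no outcome.
a, b, c, d = 18, 6, 6, 18

p_o_given_cue = a / (a + b)         # P(outcome | cue)
p_o_given_no_cue = c / (c + d)      # P(outcome | no cue)
delta_p = p_o_given_cue - p_o_given_no_cue  # contingency (delta-P)

print(p_o_given_cue)     # 0.75
print(p_o_given_no_cue)  # 0.25
print(delta_p)           # 0.5
```

Normatively, an estimate of P(outcome | cue) should stay at 0.75 regardless of the 0.25 base rate; the bias described above is that judged probability nonetheless shifts with the outcome probability in the cue's absence.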
Affiliation(s)
- Chris J. Mitchell
- School of Psychology, Plymouth University, Plymouth, UK
- School of Psychology, University of New South Wales, Sydney, Australia
- Oren Griffiths, Pranjal More, and Peter F. Lovibond
- School of Psychology, University of New South Wales, Sydney, Australia
12. Rottman BM, Hastie R. Reasoning about causal relationships: Inferences on causal networks. Psychol Bull 2013;140:109-39. PMID: 23544658. DOI: 10.1037/a0031903.
Abstract
Over the last decade, a normative framework for making causal inferences, Bayesian probabilistic causal networks, has come to dominate psychological studies of inference based on causal relationships. The causal networks X→Y→Z, X←Y→Z, and X→Y←Z supply answers to questions like, "Suppose both X and Y occur, what is the probability Z occurs?" or "Suppose you intervene and make Y occur, what is the probability Z occurs?" In this review, we provide a tutorial on how to calculate these inferences normatively. We then systematically detail the results of behavioral studies comparing human qualitative and quantitative judgments to the normative calculations for many network structures and for several types of inferences on those networks. Overall, when the normative calculations imply that an inference should increase, judgments usually go up; when the calculations imply a decrease, judgments usually go down. However, two systematic deviations appear. First, people's inferences violate the Markov assumption. For example, when inferring Z from the structure X→Y→Z, people think that X is relevant even when Y completely mediates the relationship between X and Z. Second, even when people's inferences are directionally consistent with the normative calculations, they are often not as sensitive to the parameters and structure of the network as they should be. We conclude with a discussion of productive directions for future research.
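The normative calculation for the chain X→Y→Z can be sketched by enumerating the joint distribution; the parameter values here are invented for illustration, not taken from the review:

```python
from itertools import product

# Invented parameters for the chain X -> Y -> Z.
p_x = 0.5
p_y_given_x = {True: 0.9, False: 0.1}  # P(Y=1 | X)
p_z_given_y = {True: 0.8, False: 0.2}  # P(Z=1 | Y)

def joint(x, y, z):
    px = p_x if x else 1 - p_x
    py = p_y_given_x[x] if y else 1 - p_y_given_x[x]
    pz = p_z_given_y[y] if z else 1 - p_z_given_y[y]
    return px * py * pz

def p_z(**given):
    """P(Z=1 | given) by enumerating the joint distribution."""
    num = den = 0.0
    for x, y, z in product([True, False], repeat=3):
        assign = {"x": x, "y": y, "z": z}
        if any(assign[k] != v for k, v in given.items()):
            continue
        den += joint(x, y, z)
        if z:
            num += joint(x, y, z)
    return num / den

# "Suppose X occurs, what is the probability Z occurs?"
print(round(p_z(x=True), 3))          # 0.74 = 0.9*0.8 + 0.1*0.2
# Markov property: once Y is known, X adds nothing.
print(round(p_z(y=True), 3))          # 0.8
print(round(p_z(y=True, x=True), 3))  # 0.8
```

The Markov violation the review describes is that people judge P(Z | Y, X) above P(Z | Y), even though the last two printed values are normatively identical.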
Affiliation(s)
- Reid Hastie
- University of Chicago Booth School of Business
14. Hagmayer Y, Mayrhofer R. Hierarchical Bayesian models as formal models of causal reasoning. Argument & Computation 2013. DOI: 10.1080/19462166.2012.700321.
15. Causal structure learning over time: observations and interventions. Cogn Psychol 2011;64:93-125. PMID: 22155679. DOI: 10.1016/j.cogpsych.2011.10.003.
Abstract
Seven studies examined how people learn causal relationships in scenarios when the variables are temporally dependent - the states of variables are stable over time. When people intervene on X, and Y subsequently changes state compared to before the intervention, people infer that X influences Y. This strategy allows people to learn causal structures quickly and reliably when variables are temporally stable (Experiments 1 and 2). People use this strategy even when the cover story suggests that the trials are independent (Experiment 3). When observing variables over time, people believe that when a cause changes state, its effects likely change state, but an effect may change state due to an exogenous influence in which case its observed cause may not change state at the same time. People used this strategy to learn the direction of causal relations and a wide variety of causal structures (Experiments 4-6). Finally, considering exogenous influences responsible for the observed changes facilitates learning causal directionality (Experiment 7). Temporal reasoning may be the norm rather than the exception for causal learning and may reflect the way most events are experienced naturalistically.
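The intervention strategy described above reduces to a simple rule, sketched here with hypothetical variable states (this is an illustration of the described heuristic, not the experimental procedure):

```python
def infer_influence(y_before, y_after_intervening_on_x):
    # When variables are temporally stable, a change in Y's state
    # following an intervention on X is evidence that X influences Y.
    return y_before != y_after_intervening_on_x

print(infer_influence(False, True))   # True: infer X -> Y
print(infer_influence(False, False))  # False: no evidence of influence
```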
16.
Abstract
Dealing with alternative causes is necessary to avoid making inaccurate causal inferences from covariation data. However, information about alternative causes is frequently unavailable, rendering them unobserved. The current article reviews the ways in which current learning models deal, or could deal, with unobserved causes. A new model of causal learning, BUCKLE (bidirectional unobserved cause learning), extends existing models by dynamically inferring information about unobserved, alternative causes. During the course of causal learning, BUCKLE continually computes the probability that an unobserved cause is present during a given observation and then uses the results of these inferences to learn the causal strengths of the unobserved as well as the observed causes. The current results demonstrate that BUCKLE provides a better explanation of people's causal learning than existing models do.
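The core inference step described here, estimating how likely an unobserved cause was present on a given trial, can be sketched as one Bayes-rule update under a noisy-OR combination of causes. All parameter values below are invented for illustration; they are not BUCKLE's equations or fitted values:

```python
# Illustrative parameters (not BUCKLE's actual values).
p_u = 0.5    # prior probability the unobserved cause U is present
q_cue = 0.6  # causal strength of the observed cue
q_u = 0.4    # current estimate of U's causal strength

def p_unobserved_present(cue_present, outcome_occurred):
    """Posterior P(U present | cue state, outcome) via Bayes' rule."""
    def p_outcome(u_present):
        # Noisy-OR: outcome fails only if every present cause fails.
        p_fail = 1.0
        if cue_present:
            p_fail *= 1 - q_cue
        if u_present:
            p_fail *= 1 - q_u
        return 1 - p_fail
    like_u = p_outcome(True) if outcome_occurred else 1 - p_outcome(True)
    like_no_u = p_outcome(False) if outcome_occurred else 1 - p_outcome(False)
    num = like_u * p_u
    return num / (num + like_no_u * (1 - p_u))

# An outcome with no observed cue present points squarely at the
# hidden cause; with the cue present, the evidence is weaker.
print(round(p_unobserved_present(False, True), 3))  # 1.0
print(round(p_unobserved_present(True, True), 3))   # 0.559
```

A posterior like this is then what a BUCKLE-style learner would feed into its strength updates for both the observed and the unobserved cause.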