1. Madden GJ, Mahmoudi S, Brown K. Pavlovian learning and conditioned reinforcement. J Appl Behav Anal 2023; 56:498-519. PMID: 37254881. PMCID: PMC10364091. DOI: 10.1002/jaba.1004.
Abstract
Conditioned reinforcers are widely used in applied behavior analysis. Basic research evidence reveals that Pavlovian learning plays an important role in the acquisition and efficacy of new conditioned-reinforcer functions. Thus, a better understanding of Pavlovian principles holds the promise of improving the efficacy of conditioned reinforcement in applied research and practice. This paper surveys how (and if) Pavlovian principles are presented in behavior-analytic textbooks; imprecisions and knowledge gaps within contemporary Pavlovian empirical findings are highlighted. Thereafter, six practical principles of Pavlovian conditioning are presented along with empirical support and knowledge gaps that should be filled by applied and translational behavior-analytic researchers. Innovative applications of these principles are outlined for research in language acquisition, token reinforcement, and self-control.
Affiliations: Saba Mahmoudi and Katherine Brown, Department of Psychology, Utah State University, Logan, UT, USA.
2. Zentall TR. An animal model of human gambling behavior. Current Research in Behavioral Sciences 2023. DOI: 10.1016/j.crbeha.2023.100101.
3. Toegel C, Holtyn AF, Toegel F, Perone M. The aversiveness of timeout from response-dependent and response-independent food deliveries as a function of delivery rate. J Exp Anal Behav 2022; 117:201-239. PMID: 35141888. DOI: 10.1002/jeab.742.
Abstract
Seven experiments with rats assessed the aversiveness of timeout using punishment and avoidance procedures. Experiments 1 and 2 considered the contributions of stimulus change, suspending the response-reinforcer contingency, response prevention, the general disruption in the reinforcement schedule during time-in, and overall decreases in reinforcement. Results support the conclusion that response-contingent timeouts punish behavior because they are signaled periods during which an ongoing schedule of positive reinforcement is suspended. Experiments 3, 4, and 5 assessed effects of the reinforcement rate during time-in on the punitive efficacy of timeout and, for comparison, electric shock. Evidence for a direct relation between reinforcement rate and punitive efficacy was equivocal. In Experiments 6 and 7, responding avoided timeout from response-independent food deliveries. Responding was acquired rapidly when it avoided timeouts from free deliveries of pellets or a sucrose solution, but not when it avoided free deliveries of water. At steady-state, avoidance rates and proficiency were directly related to the rate of pellet or sucrose deliveries. The relation between the nature of the time-in environment and the aversiveness of timeout was clear in our avoidance experiments, but not in our punishment experiments. We discuss interpretive problems in evaluating the aversiveness of timeout in the punishment paradigm.
Affiliations: Cory Toegel, August F. Holtyn, Forrest Toegel, and Michael Perone, Department of Psychology, West Virginia University, Morgantown, WV.
4. Shahan TA, Cunningham P. Conditioned reinforcement and information theory reconsidered. J Exp Anal Behav 2016; 103:405-418. PMID: 25766452. DOI: 10.1002/jeab.142.
Abstract
The idea that stimuli might function as conditioned reinforcers because of the information they convey about primary reinforcers has a long history in the study of learning. However, formal application of information theory to conditioned reinforcement has been largely abandoned in modern theorizing because of its failures with respect to observing behavior. In this paper we show how recent advances in the application of information theory to Pavlovian conditioning offer a novel approach to conditioned reinforcement. The critical feature of this approach is that calculations of information are based on reductions of uncertainty about expected time to primary reinforcement signaled by a conditioned reinforcer. Using this approach, we show that previous failures of information theory with observing behavior can be remedied, and that the resulting framework produces predictions similar to Delay Reduction Theory in both observing-response and concurrent-chains procedures. We suggest that the similarity of these predictions might offer an analytically grounded reason for why Delay Reduction Theory has been a successful theory of conditioned reinforcement. Finally, we suggest that the approach provides a formal basis for the assertion that conditioned reinforcement results from Pavlovian conditioning and may provide an integrative approach encompassing both domains.
5. Simultaneously observing concurrently-available schedules as a means to study the near miss event in simulated slot machine gambling. Psychological Record 2014. DOI: 10.1007/s40732-014-0095-y.
6. Slezak JM, Anderson KG. Observing of chain-schedule stimuli. Behav Processes 2014; 105:19-27. PMID: 24582929. DOI: 10.1016/j.beproc.2014.02.004.
Abstract
A classical-conditioning account of the processes maintaining behavior under chained schedules entails a backward transmission of conditioned-reinforcement effects. Assessing this process in traditional chain schedules is limited because the response maintained by stimulus onset accompanying each link in a chain schedule may also be maintained by the primary reinforcer. In the present experiment, an observing response was used to measure the conditioned-reinforcing effects of stimuli associated with a three-link chain variable-time (VT) food schedule, and resistance-to-change tests (extinction and prefeeding) were implemented to examine whether a backward transmission of reinforcement effects occurs. Four pigeons served as subjects. Observing was maintained by the production of stimuli correlated with links of a three-link chain VT schedule, with the middle-link stimulus maintaining the highest rate of observing, followed by the initial-link stimulus, and the terminal-link stimulus maintaining the lowest observing rate. Results from resistance-to-change tests of extinction and prefeeding did not support a backward transmission of reinforcement effects; in general, the pattern of resistance to change was forward. Based on past and current research, it appears that a backward pattern of relative rate decreases in responses maintained by stimuli correlated with a chain schedule under disruption (i.e., extinction and prefeeding) is not a ubiquitous process across different chain-schedule arrangements.
7. Everly JB, Holtyn AF, Perone M. Behavioral functions of stimuli signaling transitions across rich and lean schedules of reinforcement. J Exp Anal Behav 2014; 101:201-214. DOI: 10.1002/jeab.74.
8. Attitudes toward early detection of infection by the AIDS retrovirus among persons at high and low risk. 2013. DOI: 10.3758/bf03337371.
9. Troisi JR. Perhaps more consideration of Pavlovian-operant interaction may improve the clinical efficacy of behaviorally based drug treatment programs. Psychological Record 2013; 63:863-894. PMID: 25346551. PMCID: PMC4205955. DOI: 10.11133/j.tpr.2013.63.4.010.
Abstract
Drug abuse remains costly. Drug-related cues can evoke cue reactivity and craving, contributing to relapse. The Pavlovian extinction-based cue-exposure therapy (CET) has not been very successful in treating drug abuse. A functional operant analysis of the complex rituals involved in CET is outlined and reinterpreted as an operant heterogeneous chain maintained by observing responses, conditioned reinforcers, and discriminative stimuli. It is further noted that operant functions are not predicated on Pavlovian processes but can be influenced by them in contributing to relapse; several empirical studies from the animal and human literature support this view. Cue reactivity evoked by Pavlovian processes is conceptualized as an operant establishing/motivating operation. CET may be more effective if it incorporates an operant-based approach that takes into account the complexity of Pavlovian-operant interaction. Extinction of the operant chain coupled with the shaping of alternative behaviors is proposed as an integrated therapy. It is proposed that operant-based drug abuse treatments (contingency management, voucher programs, and the therapeutic work environment) might incorporate cue reactivity, as an establishing/motivating operation, to increase long-term success: a hybrid approach based on Pavlovian-operant interaction.
10. Fantino E. Behavior analysis and behavioral ecology: a synergistic coupling. The Behavior Analyst 1985; 8:151-157. PMID: 22478632. DOI: 10.1007/bf03393147.
Abstract
Recent trends in behavioral ecology and behavior analysis suggest that the two disciplines complement one another, underscoring the desirability of an integrated approach to behavior. Three examples from the foraging literature illustrate the potential value of an interdisciplinary approach. For example, a model of natural selection for foraging efficiency, optimal foraging theory, makes several predictions consistent with a hypothesis about a more proximate phenomenon, the reduction in delay to primary reinforcement. Not only are the ecological and behavior-analytic approaches to behavior complementary, but each may provide insights into the operation of controlling variables in situations usually thought of as being the other's domain.
11. Schlinger HD, Blakely E. A descriptive taxonomy of environmental operations and its implications for behavior analysis. The Behavior Analyst 1994; 17:43-57. PMID: 22478172. DOI: 10.1007/bf03392652.
Abstract
Environmental operations may be classified according to whether they have evocative or function-altering effects. Evocative events, such as the presentation of unconditioned and conditioned stimuli, establishing operations, and discriminative stimuli, serve to increase, decrease, or maintain the momentary frequency of behavior. Function-altering operations, such as operant and respondent conditioning, the correlation of stimuli, and the presentation of certain verbal stimuli, serve to increase, decrease, or maintain the evocative and function-altering (e.g., reinforcing or punishing) functions of other events. This paper expands upon the functional taxonomy of environmental events described by Michael (1993a). The resulting classification scheme should permit behavior analysts to more easily respond to similarities and differences between functional environmental events. This paper discusses implications of the suggested taxonomy for how behavior analysts talk about motivational variables, discriminative stimuli, the operant unit of analysis, and the distinction between operant and respondent conditioning.
12. Jones J, Raiff BR, Dallery J. Nicotine's enhancing effects on responding maintained by conditioned reinforcers are reduced by pretreatment with mecamylamine, but not hexamethonium, in rats. Exp Clin Psychopharmacol 2010; 18:350-358. PMID: 20695691. PMCID: PMC3626497. DOI: 10.1037/a0020601.
Abstract
Several studies have indicated that nicotine increases responding maintained by conditioned reinforcers. We assessed the effects of subcutaneous injections of 0.3 mg/kg nicotine and two nicotinic antagonists on responding maintained by conditioned and primary reinforcers and responding during extinction in 8 Long Evans rats. Mecamylamine, a central and peripheral nicotinic antagonist, and hexamethonium, a peripheral nicotinic antagonist, were administered prior to a subset of the experimental sessions. Nicotine selectively increased responding maintained by conditioned reinforcers and mecamylamine, but not hexamethonium, attenuated this effect. These results suggest that nicotine's enhancing effect on responding maintained by conditioned reinforcers is mediated in the central nervous system.
Affiliations: Bethany R. Raiff and Jesse Dallery, University of Florida; National Development and Research Institutes.
14.
Abstract
Seventeen pigeons were exposed to a three-key discrete-trial procedure in which a peck on the lit center key produced food if, and only if, the left keylight was lit. The center key was illuminated by a peck on the lit right key. Of interest was whether subjects pecked the right key before or after the response-independent onset of the left keylight. Pecks on the right key after left-keylight onset suggest control of behavior by the left keylight, an establishing stimulus. In three experiments, the strength of center-keylight onset as a conditioned reinforcer for a response on the right key was manipulated by altering the size of the reduction in time to food delivery correlated with its onset. Control of pigeons' key pecks by onset of the left keylight occurred on more trials per session when the center keylight was a relatively weak conditioned reinforcer and on fewer trials per session when it was a relatively strong conditioned reinforcer. Differences across conditions in the degree of control by onset of the establishing stimulus were greatest when changes in conditioned-reinforcer strength occurred relatively frequently and were signaled. The results provide evidence of the function of an establishing stimulus.
15. Stockhorst U. Effects of different accessibility of reinforcement schedules on choice in humans. J Exp Anal Behav 1994; 62:269-292. PMID: 16812743. PMCID: PMC1334462. DOI: 10.1901/jeab.1994.62-269.
Abstract
Based on the delay-reduction hypothesis, a less profitable schedule should be rejected if its duration exceeds the mean delay to reinforcement. It should be accepted if its duration is shorter than the mean delay. This was tested for humans, using a successive-choice schedule. The accessibility of the less profitable (variable-interval 18 s) schedule was varied by changing the duration (in terms of a fixed interval) of the waiting-time component preceding its presentation. Forty-eight students were randomly assigned to three groups. In Phase 1, the duration of the less profitable schedule equaled the mean delay to reinforcement in all groups. In Phase 2, waiting time preceding the less profitable schedule was reduced in Group 1 and increased in Group 2. Thus, the schedule was correlated either with a relative delay increase (Group 1) or a delay reduction (Group 2). In Group 3, conditions remained unchanged. As predicted, acceptance of the less profitable schedule decreased in Group 1 and increased in Group 2. The increased acceptance in Group 2 was accompanied by a decreased acceptance of the more profitable (variable-interval 3 s) schedule, resembling a pattern of negative contrast. Response rates were higher under the component preceding (a) the more profitable schedule in Group 1 and (b) the less profitable schedule in Group 2. Implications for the modification of human choice behavior are discussed.
16. Perone M, Kaminski BJ. Conditioned reinforcement of human observing behavior by descriptive and arbitrary verbal stimuli. J Exp Anal Behav 1992; 58:557-575. PMID: 16812679. PMCID: PMC1322102. DOI: 10.1901/jeab.1992.58-557.
Abstract
College students earned monetary reinforcers by pressing a key according to a compound schedule with variable-interval and extinction components. Pressing additional keys occasionally produced displays of either of two verbal stimuli; one was uncorrelated with the schedule components, and the other was correlated with the extinction component. In Experiments 1 and 2, the display area of the apparatus was blank unless an observing key was pressed, whereupon a descriptive message appeared. Most students preferred an uncorrelated stimulus stating that "Some of this time scores are TWICE AS LIKELY as normal, and some of this time NO SCORES can be earned" over a stimulus stating that "At this time NO SCORES can be earned." In Experiment 3, the display area indicated that "The Current Status of the Program is: NOT SHOWN." Presses on the observing keys replaced this message with stimuli that provided arbitrary labels for the schedule conditions. All of the students preferred a stimulus stating that "The Current Status of the Program is: B" over an uncorrelated stimulus stating that "The Current Status of the Program is: either A or B." Thus, under some circumstances, observing was maintained by a stimulus correlated with extinction, a finding that poses a challenge for Pavlovian accounts of conditioned reinforcement. Differences in the maintenance of observing by the descriptive and arbitrary stimuli may be attributed to differences in either the strength or nature of the instructional control exerted by the verbal stimuli.
17.
Abstract
Six pigeons responded in fifty-six conditions on a concurrent-chains procedure. Conditions included several with equal initial links and unequal terminal links, several with unequal initial links and equal terminal links, and several with both unequal initial and terminal links. Although the delay-reduction hypothesis accounted well for choice when the initial links were equal (mean deviation of .04), it fit the data poorly when the initial links were unequal (mean deviation of .18). A modification of the delay-reduction hypothesis, replacing the rates of reinforcement with the square roots of these rates, fit the data better than either the unmodified delay-reduction equation or Killeen's (1982) model. The modified delay-reduction equation was also consistent with data from prior studies using concurrent chains. The absolute rates of responding in each terminal link were well described by the same hyperbola (Herrnstein, 1970) that describes response rates on simple interval schedules.
18. Fantino E, Case DA. Human observing: maintained by stimuli correlated with reinforcement but not extinction. J Exp Anal Behav 1983; 40:193-210. PMID: 16812343. PMCID: PMC1347908. DOI: 10.1901/jeab.1983.40-193.
Abstract
College students received points exchangeable for money (reinforcement) on a variable-time 60-second schedule that alternated randomly with an extinction component. Subjects were informed that responding would not influence either the rate or distribution of reinforcement. Instead, presses on either of two levers ("observing responses") produced stimuli. In each of four experiments, stimuli positively correlated with reinforcement and/or stimuli uncorrelated with reinforcement were each chosen over stimuli correlated with extinction. These results are consistent with prior results from pigeons in supporting the conditioned-reinforcement hypothesis of observing and in not supporting the uncertainty-reduction hypothesis.
19.
Abstract
Six pigeons responded under concurrent-chains schedules. For 3 birds, pecking was required in both initial links; for 3 others, treadle pressing was required. For all subjects, pecking was required in one terminal link and treadling in the other. The initial links consisted of independent variable-interval 60-s schedules. All birds were exposed to five pairs of terminal-link variable-interval schedules over 10 conditions: 6 s versus 54 s, 18 s versus 42 s, 30 s versus 30 s, 42 s versus 18 s, and 54 s versus 6 s. Comparisons of responding under nominally identical terminal-link variable-interval schedules showed that, without exception, higher choice proportions were obtained for the alternative correlated with terminal-link pecking. Moreover, terminal-link delay to reinforcement was shorter for terminal-link pecking than for terminal-link treadling chains. This factor, along with response force requirements, was implicated in explaining the present as well as previous findings of preference for pecking over treadling. It was found also that the delay-reduction hypothesis provided only a moderately accurate description of performance under concurrent chains in which different terminal-link response topographies are required. These findings suggest that quantitative models neglecting the effects of differing terminal-link topographies may be incomplete.
20.
Abstract
An extension of the generalized matching law incorporating context effects on terminal-link sensitivity is proposed as a quantitative model of behavior under concurrent chains. The contextual choice model makes many of the same qualitative predictions as the delay-reduction hypothesis, and assumes that the crucial contextual variable in concurrent chains is the ratio of average times spent, per reinforcement, in the terminal and initial links; this ratio controls the differential effectiveness of terminal-link stimuli as conditioned reinforcers. Ninety-two concurrent-chains data sets from 19 published studies were fitted to the model. Averaged across all studies, the model accounted for 90% of the variance in pigeons' relative initial-link responding. The model therefore demonstrates that a matching-law analysis of concurrent chains (the assumption that relative initial-link responding equals relative terminal-link value) remains quantitatively viable. Because the model reduces to the generalized matching law when terminal-link duration is zero, it provides a quantitative integration of concurrent schedules and concurrent chains.
21. McDevitt M, Spetch M, Dunn R. Contiguity and conditioned reinforcement in probabilistic choice. J Exp Anal Behav 1997; 68:317-327. PMID: 16812865. PMCID: PMC1284641. DOI: 10.1901/jeab.1997.68-317.
22. Lalli JS, Mauro BC. The paradox of preference for unreliable reinforcement: the role of context and conditioned reinforcement. J Appl Behav Anal 1995; 28:389-394. PMID: 16795870. PMCID: PMC1279841. DOI: 10.1901/jaba.1995.28-389.
Abstract
We discuss Belke and Spetch's (1994) work on choice between reliable and unreliable reinforcement. The studies by Belke and Spetch extend a line of basic research demonstrating that under certain experimental conditions in a concurrent chains procedure, pigeons prefer an alternative that produces unreliable reinforcement. The authors describe the variables that influence preference for unreliable reinforcement, including the signaling and the duration of the reinforcement schedules, the context in which the signaling stimuli occur, and the effects of conditioned reinforcement. Hypothetical applied examples that address these variables are provided, and their influence on preference for unreliable reinforcement in humans is discussed. We conclude by suggesting a line of applied research to examine the relationship between these variables and a preference for unreliable reinforcement.
23.
Abstract
If the functional relations governing the strength of a conditioned reinforcer correspond to those obtained with other Pavlovian procedures (e.g., Kaplan, 1984), the termination of stimuli appearing early in the interval between successive food deliveries should be reinforcing. During initial training we presented four key colors, followed by food, in a recurrent sequence to each of 6 pigeons. This established a baseline level of autoshaped pecking. In later sessions, we terminated each of these colors or only the first color for a brief period following each peck, replacing the original color with a standard substitute to avoid darkening the key. Pecking decreased in the presence of the last color in the sequence but increased in the presence of the first. In accord with contemporary models of Pavlovian conditioning, these and other data suggest that the behavioral effects of stimuli in a chain may be better understood in terms of what each stimulus predicts, as measured by relative time to the terminal reinforcer, than in the exclusively positive terms of the traditional formulation (Skinner, 1938). The same model may also account for the initial pause under fixed-interval and fixed-ratio schedules of reinforcement.
28. The multiple determinants of observing behavior. Behav Brain Sci 2010. DOI: 10.1017/s0140525x00018045.
41. Secondary reinforcement: still alive? Behav Brain Sci 2010. DOI: 10.1017/s0140525x00018033.
45.
Abstract
Behaving organisms are continually choosing. Recently, the theoretical and empirical study of decision making by behavioral ecologists and experimental psychologists has converged in the area of foraging, particularly food acquisition. This convergence has raised the interdisciplinary question of whether principles that have emerged from the study of decision making in the operant conditioning laboratory are consistent with decision making in naturally occurring foraging. One such principle, the "parameter-free delay-reduction hypothesis," developed in studies of choice in the operant conditioning laboratory, states that the effectiveness of a stimulus as a reinforcer may be predicted most accurately by calculating the decrease in time to food presentation correlated with the onset of the stimulus, relative to the length of time to food presentation measured from the onset of the preceding stimulus. Since foraging involves choice, the delay-reduction hypothesis may be extended to predict aspects of foraging. We discuss the strategy of assessing parameters of foraging with operant laboratory analogues to foraging. We then compare the predictions of the delay-reduction hypothesis with those of optimal foraging theory, developed by behavioral ecologists, showing that, with two exceptions, the two positions make comparable predictions. The delay-reduction hypothesis is also compared to several contemporary psychological accounts of choice.
Results from several of our experiments with pigeons, designed as operant conditioning simulations of foraging, have shown the following: The more time subjects spend searching for or traveling between potential food sources, the less selective they become, that is, the more likely they are to accept the less preferred outcome; increasing time spent procuring ("handling") food increases selectivity; how often the preferred outcome is available has a greater effect on choice than how often the less preferred outcome is available; subjects maximize reinforcement whether it is the rate, amount, or probability of reinforcement that is varied; and there are no significant differences between subjects performing under different types of deprivation (open vs. closed economies). These results are all consistent with the delay-reduction hypothesis. Moreover, they suggest that the technology of the operant conditioning laboratory may have fruitful application in the study of foraging, and, in doing so, they underscore the importance of an interdisciplinary approach to behavior.