1. Camarena HO, García-Leal Ó, Delgadillo-Orozco J, Barrón E. Probabilistic reinforcement precludes transitive inference: A preliminary study. Front Psychol 2023;14:1111597. [PMID: 37063537] [PMCID: PMC10097881] [DOI: 10.3389/fpsyg.2023.1111597]
Abstract
In the basic verbal task from Piaget, when a relation of the form A > B and B > C is given, the logical inference A > C is expected. This process is called transitive inference (TI). The version adapted for animals involves simultaneous discrimination between pairs of stimuli: when A+B-, B+C-, C+D-, D+E- is trained, a preference for B over D is expected, on the assumption that if A>B>C>D>E, then B>D. This effect has been widely reported across several procedures and species. In the current experiment TI was evaluated under probabilistic reinforcement: the positive stimulus of each pair was reinforced with probability .7 and the negative stimulus with probability .3. This arrangement still permits the ordering A>B>C>D>E, but it makes TI more difficult. Five pigeons (Columba livia) were exposed to this arrangement. Only one pigeon reached criterion on the C+D- discrimination, and only that pigeon learned TI. Additionally, correct response ratios did not predict B-vs-D performance. Consequently, probabilistic reinforcement disrupted TI, although some positional ordering was retained in the test. The results suggest that TI is affected not only by associative strength but also by the positional ordering of the stimuli. The discussion addresses the two main accounts of TI: the associative account and the ordinal-representation account.
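The training arrangement can be sketched with a simple associative-strength simulation. This is an illustrative reading of the associative account discussed in the abstract, not the authors' model; the learning rate, exploration rate, and trial count below are made-up assumptions:

```python
import random

def simulate(n_trials=4000, alpha=0.1, epsilon=0.1, seed=0):
    """Value learning over the 5-term series A+B-, B+C-, C+D-, D+E-.
    The 'positive' stimulus of each pair pays off with p=.7 and the
    'negative' one with p=.3, as in the probabilistic arrangement."""
    rng = random.Random(seed)
    pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
    V = {s: 0.0 for s in "ABCDE"}  # associative strengths
    for _ in range(n_trials):
        pos, neg = rng.choice(pairs)
        if rng.random() < epsilon or V[pos] == V[neg]:
            choice = rng.choice([pos, neg])  # occasional exploration
        else:
            choice = pos if V[pos] > V[neg] else neg
        p_food = 0.7 if choice == pos else 0.3
        reward = 1.0 if rng.random() < p_food else 0.0
        V[choice] += alpha * (reward - V[choice])
    return V

V = simulate()
# On the associative account, the B-vs-D test tracks V["B"] - V["D"];
# probabilistic payoffs compress that difference relative to
# deterministic A+B-, B+C-, C+D-, D+E- training.
```

This sketches only the associative reading; the ordinal-representation account would not reduce the test to these stimulus values.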
Affiliation(s)
- Héctor O. Camarena, Centro de Estudios e Investigaciones en Comportamiento, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico. Correspondence: Héctor O. Camarena
- Óscar García-Leal, Department of Environmental Sciences, University of Guadalajara, Guadalajara, Mexico; School of Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Julieta Delgadillo-Orozco, Centro de Estudios e Investigaciones en Comportamiento, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Erick Barrón, Basic Psychology Department, University of Guadalajara, Guadalajara, Mexico
2. Assessing human performance during contingency changes and extinction tests in reversal-learning tasks. Learn Behav 2022;50:494-508. [PMID: 35112316] [DOI: 10.3758/s13420-022-00513-9]
Abstract
Serial reversal-learning procedures are simple preparations for studying how animals learn about environmental changes, including how they flexibly shift responding to adapt to changing reinforcement contingencies. The present study examined serial reversal learning in humans by arranging both midsession and variable contingency reversals across two experiments. We also examined the effects of extinction by adding nonreinforced trials at the end of later sessions, and provided the first evaluation of the effects of win-stay/lose-shift versus counting strategies on the accuracy and response latency of humans' reversal-learning performance. In each experiment, responding tracked contingency reversals, with participants primarily using either win-stay/lose-shift or counting strategies. Introducing variable reversal points in the second experiment resulted in near-exclusive win-stay/lose-shift responding and eliminated counting of trials. Each experiment also revealed an immediate shift from S2 to S1 after experiencing extinction during the initial test trial, indicating resurgence of the initial response through a win-stay/lose-shift response pattern. The present study therefore replicates and extends prior findings of a win-stay/lose-shift response pattern in situations of greater uncertainty. These findings suggest that differences in environmental certainty induce qualitatively different decision-making strategies.
3. Machado A, Vasconcelos M. Dissolving the molar-molecular controversy. J Exp Anal Behav 2021;115:596-603. [PMID: 33497470] [DOI: 10.1002/jeab.675]
4. Iyer ES, Kairiss MA, Liu A, Otto AR, Bagot RC. Probing relationships between reinforcement learning and simple behavioral strategies to understand probabilistic reward learning. J Neurosci Methods 2020;341:108777. [PMID: 32417532] [DOI: 10.1016/j.jneumeth.2020.108777]
Abstract
BACKGROUND: Reinforcement learning (RL) and win-stay/lose-shift models of decision making are both widely used to describe how individuals learn about and interact with rewarding environments. Although mutually informative, these accounts are often conceptualized as independent processes, so the potential relationships between win-stay/lose-shift tendencies and RL parameters have not been explored.
NEW METHOD: We introduce a methodology that directly relates RL parameters to behavioral strategy. By simulating win-stay/lose-shift tendencies across the RL parameter space, we obtain a truncated multivariate normal distribution of RL parameters given those tendencies; maximizing this distribution for a given set of win-stay/lose-shift tendencies approximates the underlying RL parameters.
RESULTS: We demonstrate novel relationships between win-stay/lose-shift tendencies and RL parameters that challenge conventional interpretations of lose-shift as a metric of loss sensitivity. Further, we demonstrate in both simulated and empirical data that this method of parameter approximation yields reliable parameter recovery.
COMPARISON WITH EXISTING METHOD: We compare this method against the conventional maximum likelihood estimation approach in simulated noisy and empirical data. For simulated noisy data, the method performs similarly to maximum likelihood estimation; for empirical data, it provides a more reliable approximation of RL parameters.
CONCLUSIONS: We demonstrate relationships between win-stay/lose-shift tendencies and RL parameters and introduce a method that leverages these relationships to recover RL parameters exclusively from win-stay/lose-shift tendencies.
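The forward direction of that mapping (RL parameters to win-stay/lose-shift tendencies) can be illustrated with a toy simulation. The two-armed bandit, the softmax Q-learner, and all parameter values below are assumptions for illustration, not the authors' code:

```python
import math
import random

def wsls_from_rl(alpha, beta, p_reward=(0.7, 0.3), n_trials=2000, seed=0):
    """Simulate a softmax Q-learning agent on a two-armed bandit and
    summarize its choices as win-stay and lose-shift proportions.
    Tabulating these proportions across a grid of (alpha, beta) values
    is the kind of forward map the inverse method above relies on."""
    rng = random.Random(seed)
    Q = [0.0, 0.0]
    prev_choice = prev_reward = None
    stay_after_win = shift_after_loss = wins = losses = 0
    for _ in range(n_trials):
        # softmax choice between the two arms
        p1 = 1.0 / (1.0 + math.exp(-beta * (Q[1] - Q[0])))
        choice = 1 if rng.random() < p1 else 0
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        if prev_choice is not None:
            if prev_reward:
                wins += 1
                stay_after_win += (choice == prev_choice)
            else:
                losses += 1
                shift_after_loss += (choice != prev_choice)
        # delta-rule value update
        Q[choice] += alpha * (reward - Q[choice])
        prev_choice, prev_reward = choice, reward
    return stay_after_win / wins, shift_after_loss / losses

win_stay, lose_shift = wsls_from_rl(alpha=0.3, beta=5.0)
```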
Affiliation(s)
- Eshaan S Iyer, Integrated Program in Neuroscience, McGill University, 3801 Rue University, Montréal, QC H3A 2B4, Canada
- Megan A Kairiss, Department of Psychology, McGill University, 1205 Ave Dr. Penfield, Montréal, QC H3A 1B1, Canada
- Adrian Liu, Department of Physics, McGill University, 3600 Rue University, Montréal, QC H3A 2T8, Canada
- A Ross Otto, Department of Psychology, McGill University, 1205 Ave Dr. Penfield, Montréal, QC H3A 1B1, Canada
- Rosemary C Bagot, Department of Psychology, McGill University, 1205 Ave Dr. Penfield, Montréal, QC H3A 1B1, Canada; Ludmer Centre for Neuroinformatics and Mental Health, 3661 Rue University, Montréal, QC H3A 2B3, Canada
5. Gomes-Ng S, Landon J, Elliffe D, Bensemann J, Cowie S. The effects of changeover delays on local choice. Behav Processes 2018;150:36-46. [DOI: 10.1016/j.beproc.2018.02.019]
6. Midsession reversals with pigeons: visual versus spatial discriminations and the intertrial interval. Learn Behav 2014;42:40-6. [PMID: 24043581] [DOI: 10.3758/s13420-013-0122-x]
Abstract
Discrimination reversal learning has been used as a measure of a species' flexibility in dealing with changes in reinforcement contingency. In the simultaneous-discrimination, midsession-reversal task, one stimulus (S1) is correct for the first half of the session, and the other stimulus (S2) is correct for the second half. After training, pigeons show a curious pattern of choices: They begin to respond to S2 well before the reversal point (anticipatory errors), and they continue to respond to S1 well after the reversal (perseverative errors). That is, pigeons appear to use the passage of time or the number of trials into the session as a cue to reverse, and are less sensitive to the feedback available at the point of reversal. To determine whether the nature of the discrimination, or a failure of memory for the stimulus chosen on the preceding trial, contributed to the pigeons' less-than-optimal performance, we manipulated the type of discrimination (spatial or visual) and the duration of the intertrial interval (5.0 or 1.5 s). The major finding was that the pigeons performed optimally only when the discrimination was spatial and the intertrial interval was short.
7.
Abstract
We studied behavioral flexibility, or the ability to modify one's behavior in accordance with the changing environment, in pigeons using a reversal-learning paradigm. In two experiments, each session consisted of a series of five-trial sequences involving a simple simultaneous color discrimination in which a reversal could occur during each sequence. The ideal strategy would be to start each sequence with a choice of S1 (the first correct stimulus) until it was no longer correct, and then to switch to S2 (the second correct stimulus), thus utilizing cues provided by local reinforcement (feedback from the preceding trial). In both experiments, subjects showed little evidence of using local reinforcement cues, but instead used the mean probabilities of reinforcement for S1 and S2 on each trial within each sequence. That is, subjects showed remarkably similar behavior, regardless of where (or, in Exp. 2, whether) a reversal occurred during a given sequence. Therefore, subjects appeared to be relatively insensitive to the consequences of responses (local feedback) and were not able to maximize reinforcement. The fact that pigeons did not use the more optimal feedback afforded by recent reinforcement contingencies to maximize their reinforcement has implications for their use of flexible response strategies under reversal-learning conditions.
8.
Abstract
Eighteen pigeons served in a discrete-trials short-term memory experiment in which the reinforcement probability for a peck on one of two keys depended on the response reinforced on the previous trial: either the probability of reinforcement on a trial was 0.8 for the same response reinforced on the previous trial and 0.2 for the other response (Group A), or it was 0 or 0.2 for the same response and 1.0 or 0.8 for the other response (Group B). A correction procedure ensured that, over all trials, reinforcement was distributed equally across the left and right keys. The optimal strategy was either win-stay, lose-shift (Group A) or win-shift, lose-stay (Group B). The retention interval, that is, the intertrial interval, was varied. The average probability of choosing the optimal alternative reinforced 80% of the time was 0.96, 0.84, and 0.74 after delays of 2.5, 4.0, and 6.0 sec for Group A, and 0.87, 0.81, and 0.55 after the same delays for Group B. This outcome is consistent with the view that behavior approximated the optimal response strategy, but only to the extent permitted by a subject's short-term memory for the cue correlated with reinforcement, that is, its own most recently reinforced response. More generally, this result is consistent with "molecular" analyses of operant behavior, but is inconsistent with traditional "molar" analyses holding that fundamental controlling relations may be discovered by routinely averaging over different local reinforcement contingencies. In the present experiment, the molar results were byproducts of local reinforcement contingencies involving an organism's own recent behavior.
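As an illustrative calculation (not from the article), the Group A accuracies can be converted into implied recall probabilities under a simple remember-or-guess model: if the bird applies the optimal win-stay, lose-shift rule whenever it recalls its most recently reinforced response (probability m) and guesses otherwise, then accuracy = m + (1 - m)/2, so m = 2 * accuracy - 1:

```python
def implied_memory(accuracy):
    """Recall probability m implied by a remember-or-guess model:
    accuracy = m * 1.0 + (1 - m) * 0.5  =>  m = 2 * accuracy - 1."""
    return 2.0 * accuracy - 1.0

# Group A accuracies at the 2.5, 4.0, and 6.0 s retention intervals
group_a = {2.5: 0.96, 4.0: 0.84, 6.0: 0.74}
m = {delay: round(implied_memory(acc), 2) for delay, acc in group_a.items()}
# m == {2.5: 0.92, 4.0: 0.68, 6.0: 0.48}
```

Under this (assumed) model, recall of the previously reinforced response decays from roughly .92 to .48 as the retention interval grows from 2.5 to 6.0 s, consistent with the memory-limited reading of the data.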
9.
Abstract
Rats were exposed to a random sequence of reinforcement on two levers, such that there was no way to predict from the previous reinforcement which lever would deliver reinforcement next. The rats showed a tendency to repeat the choice that had just produced reinforcement, despite the absence of an overall contingency that differentially reinforced such repetition. However, this tendency decreased with continued exposure to the schedule. Runs of successive reinforcements on a lever increased the probability of pressing that lever, but only slightly, and only in the earlier phases of training. The more quickly a press was made after reinforcement the more likely it was to be on the lever that had delivered that reinforcement. Repetition of choice followed by reinforcement should be viewed as a naturally occurring behavior in the rat, but not necessarily as a behavior that will continue without differential reinforcement of repetition.
10.
11.
12.
13.
14.
Abstract
Effective conditioning requires a correlation between the experimenter's definition of a response and the organism's, but an animal's perception of its behavior differs from ours. These experiments explore various definitions of the response, using the slopes of learning curves to infer which comes closest to the organism's definition. The resulting exponentially weighted moving average provides a model of memory that is used to ground a quantitative theory of reinforcement. The theory assumes that incentives excite behavior and focus that excitement on responses that are contemporaneous in memory. The correlation between the organism's memory and the behavior measured by the experimenter is given by coupling coefficients, which are derived for various schedules of reinforcement. The coupling coefficients for simple schedules may be concatenated to predict the effects of complex schedules. The coefficients are inserted into a generic model of arousal and temporal constraint to predict response rates under any scheduling arrangement. The theory posits a response-indexed decay of memory, not a time-indexed one. It requires that incentives displace memory for the responses that occur before them, and may truncate the representation of the response that brings them about. As a contiguity-weighted correlation model, it bridges opposing views of the reinforcement process. By placing the short-term memory of behavior in so central a role, it provides a behavioral account of a key cognitive process.
15. From overt behavior to hypothetical behavior to memory: Inference in the wrong direction. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033756]
16.
17.
18.
19.
20.
21.
22.
23. Reinforcement without representation. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033690]
24. Short-term memory in human operant conditioning. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x0003380x]
25. Has learning been shown to be attractor modification within reinforcement modelling? Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033689]
26. A mathematical theory of reinforcement: An unexpected place to find support for analogical memory coding. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033847]
27. Integration and specificity of retrieval in a memory-based model of reinforcement. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033707]
28.
29.
30. What defines a legitimate issue for Skinnerian psychology: Philosophy or technology? Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033653]
31.
32. Awareness and reinforcement. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x0003377x]
33. Practical effects of response specification. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033781]
34. Moving beyond schedules and rate: A new trajectory? Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033677]
35. The return of the reinforcement theorists. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033859]
36. Memory and the integration of response sequences. Behav Brain Sci 1994. [DOI: 10.1017/s0140525x00033768]
37.
Abstract
Rats were trained on a discrete-trial probability learning task. In Experiment 1, the molar reinforcement probabilities for the two response alternatives were equal, and the local contingencies of reinforcement differentially reinforced a win-stay, lose-shift response pattern. The win-stay portion was learned substantially more easily and appeared from the outset of training, suggesting that its occurrence did not depend upon discrimination of the local contingencies but rather only upon simple strengthening effects of individual reinforcements. Control by both types of local contingencies decreased with increases in the intertrial interval, although some control remained with intertrial intervals as long as 30 s. In Experiment 2, the local contingencies always favored win-shift and lose-shift response patterns but were asymmetrical for the two responses, causing the molar reinforcement rates for the two responses to differ. Some learning of the alternation pattern occurred with short intertrial intervals, although win-stay behavior occurred for some subjects. The local reinforcement contingencies were discriminated poorly with longer intertrial intervals. In the absence of control by the local contingencies, choice proportion was determined by the molar contingencies, as indicated by high exponent values for the generalized matching law with long intertrial intervals, and lower values with short intertrial intervals. The results show that when molar contingencies of reinforcement and local contingencies are in opposition, both may have independent roles. Control by molar contingencies cannot generally be explained by local contingencies.
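The "exponent values for the generalized matching law" refer to the sensitivity parameter a in its standard power-function form (standard notation, not restated in the abstract):

```latex
\frac{B_1}{B_2} = b\left(\frac{R_1}{R_2}\right)^{a}
```

where B_1 and B_2 are the response rates on the two alternatives, R_1 and R_2 the obtained reinforcement rates, b a bias term, and a the sensitivity of the behavior ratio to the reinforcement ratio. Values of a near 1 indicate strict matching to the molar contingencies, as reported here with long intertrial intervals, while lower values indicate undermatching.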
Affiliation(s)
- B A Williams, Department of Psychology, UCSD, La Jolla, CA 92093-0109
38.
39. Spetch ML, Treit D. Does effort play a role in the effect of response requirements on delayed matching to sample? J Exp Anal Behav 1986;45:19-31. [PMID: 3950533] [PMCID: PMC1348208] [DOI: 10.1901/jeab.1986.45-19]
Abstract
The possible role of "effort" in the accuracy of pigeons' performance on a delayed matching-to-sample procedure was investigated by examining the effects of response requirements that accompanied a trial-initiating stimulus and that accompanied a sample stimulus. In the first experiment, the effect of varying the size of a fixed-ratio requirement for responses during an initiating stimulus was compared to that of varying a similar requirement for responses during the sample stimulus. Accuracy increased reliably with increases in the ratio scheduled during the sample stimulus, but was not significantly affected by increases in the ratio scheduled on the key during the initiating stimulus. In another phase of Experiment 1, sample duration was held constant while the ratio requirement was varied during the initiating stimulus. Again, accuracy of matching to sample was not significantly affected by the size of the ratio scheduled during the initiating stimulus. Experiment 2 provided a systematic replication of these results in another group of pigeons and included a more detailed analysis of responding. These results support the view that increases in sample-response requirement facilitate accuracy of delayed matching by increasing the durations of exposure to the sample stimuli, and do not support a role of effort in the sample-response effect. In Experiment 3, the facilitative effect of responses on the sample but not of those on the initiating stimulus was replicated using a simultaneous matching-to-sample procedure. This finding provides further evidence against an interpretation of response-requirement effects that appeals to effort; the finding also suggests that sample exposure might affect initial discrimination of the sample rather than remembering the sample.
40. Discrete-trial probability learning in rats: Effects of local contingencies of reinforcement. Anim Learn Behav 1984. [DOI: 10.3758/bf03199978]
41.
Abstract
Pigeons acquired a conditional discrimination in an autoshaping procedure in which certain stimulus combinations (form plus color) were followed by food, whereas others were not followed by food. Although the discrimination normally was acquired quickly, it was completely prevented when the color elements of the stimulus compounds were presented during the intertrial intervals preceding the trials in which both stimulus elements were available. This failure of discrimination was then prevented by having the colors serve as houselights rather than being localized on the response key and by pretraining procedures in which the colors were utilized in simpler discriminations. The results suggest that stimulus salience plays a critical role in determining whether conditional discriminations will be acquired, as the effects of all of the different operations could be understood in terms of increasing or decreasing the salience of the color elements, above or below some threshold value.
42. Overton DA. Influence of shaping procedures and schedules of reinforcement on performance in the two-bar drug discrimination task: a methodological report. Psychopharmacology (Berl) 1979;65:291-8. [PMID: 117502] [DOI: 10.1007/bf00492218]
43. Shimp CP, Moffitt M. Short-term memory in the pigeon: delayed-pair-comparison procedures and some results. J Exp Anal Behav 1977;28:13-25. [PMID: 903742] [PMCID: PMC1333610] [DOI: 10.1901/jeab.1977.28-13]
Abstract
A discrete-trials, delayed-pair-comparison procedure was developed to study visual short-term memory for tilted lines. In four experiments, pigeons' responses on left or right keys were reinforced with food depending on whether a comparison stimulus was or was not the same as a standard stimulus presented earlier in the same trial. In Experiment I, recall was an increasing function of the exposure time of the to-be-remembered stimulus and a decreasing function of the retention interval. In Experiment II, retroactive interference was investigated: recall was poorer after a retention interval during which either a tilted line or contextual stimuli, in the form of the illuminated experimental chamber, were presented. In Experiment III, a subject was required to engage, throughout the retention interval, in one or the other of two different behaviors, depending on which of two stimuli it was to remember. This mnemonic strategy vastly improved recall after 15- and 20-second retention intervals. In Experiment IV, the opposite end of the performance continuum was studied: by combining the effects of a larger stimulus set with what presumably was an increased memory load, performance was reduced to approximately chance levels after retention intervals shorter than 1 second.
44. Short-term retention of response outcome as a determinant of serial reversal learning. Learn Motiv 1976. [DOI: 10.1016/0023-9690(76)90047-3]
45.