1
Manolov R, Lebrault H, Krasny-Pacini A. How to assess and take into account trend in single-case experimental design data. Neuropsychol Rehabil 2024; 34:388-429. [PMID: 36961228] [DOI: 10.1080/09602011.2023.2190129]
Abstract
One of the data features that are expected to be assessed when analyzing single-case experimental design (SCED) data is trend. The current text deals with four different questions that applied researchers can ask themselves when assessing trend, especially when dealing with an improving baseline trend: (a) What options exist for assessing the presence of trend? (b) Once trend is assessed, what criterion can be followed for deciding whether it is necessary to control for baseline trend? (c) What strategy can be followed for controlling for baseline trend? (d) How should one proceed when there is baseline trend in only some A-B comparisons? Several options are reviewed for each of these questions in the context of real data, and tentative recommendations are provided. A new user-friendly website was developed to implement the options for fitting a trend line and a criterion for selecting a specific technique for that purpose. Trend-related and more general data-analytical recommendations are provided for applied researchers. Trial registration: ClinicalTrials.gov identifier: NCT04560777.
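To make the idea of controlling for baseline trend concrete, the sketch below (not taken from the paper or its website; data and function names are hypothetical) fits an ordinary least squares trend line to the phase A data, projects it across both phases, and works with the residuals, so that an apparent A-B level change caused purely by a continuing baseline trend disappears:

```python
import numpy as np

def baseline_trend(baseline):
    """Fit an ordinary least squares (OLS) trend line to baseline (phase A) data.

    Returns the intercept and slope of y = intercept + slope * t,
    with t = 0, 1, 2, ... indexing the baseline sessions.
    """
    t = np.arange(len(baseline), dtype=float)
    slope, intercept = np.polyfit(t, baseline, deg=1)
    return intercept, slope

def detrend_by_baseline(baseline, intervention):
    """Remove the projected baseline trend from both phases.

    The trend estimated from phase A is extrapolated into phase B and
    subtracted, so that any remaining A-B difference is not attributable
    to the pre-existing baseline trend.
    """
    intercept, slope = baseline_trend(baseline)
    n_a = len(baseline)
    t_all = np.arange(n_a + len(intervention), dtype=float)
    projected = intercept + slope * t_all
    data = np.concatenate([baseline, intervention])
    residuals = data - projected
    return residuals[:n_a], residuals[n_a:]

# Example: an improving baseline trend (+1 per session) that simply continues in phase B.
a = np.array([2.0, 3.0, 4.0, 5.0])
b = np.array([6.0, 7.0, 8.0, 9.0])
a_res, b_res = detrend_by_baseline(a, b)
# After detrending, the apparent A-B level change vanishes.
print(round(float(b_res.mean() - a_res.mean()), 6))  # 0.0
```

This is only one of the strategies the paper reviews; the choice of trend-fitting technique and the criterion for deciding whether detrending is needed at all are exactly the questions the article addresses.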
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- Hélène Lebrault
- Rehabilitation department for children with congenital neurological injury, Saint Maurice Hospitals, Saint Maurice, France
- Sorbonne Université, Laboratoire d'Imagerie Biomédicale, LIB Paris, France
- GRC 24, Handicap Moteur et Cognitif et Réadaptation (HaMCRe), Sorbonne Université, Paris, France
- Agata Krasny-Pacini
- Pôle de Médecine Physique et de Réadaptation, Institut Universitaire de Réadaptation Clemenceau Strasbourg, Hôpitaux Universitaires de Strasbourg, UF 4372, Strasbourg, France
- Unité INSERM 1114 Neuropsychologie Cognitive et Physiopathologie De La Schizophrénie, Département de Psychiatrie, Hôpital Civil de Strasbourg, Strasbourg, France
- Université de Strasbourg, Faculté de Médecine, Strasbourg, France
2
Chen LT, Chen YK, Yang TR, Chiang YS, Hsieh CY, Cheng C, Ding QW, Wu PJ, Peng CYJ. Examining the normality assumption of a design-comparable effect size in single-case designs. Behav Res Methods 2024; 56:379-405. [PMID: 36650402] [DOI: 10.3758/s13428-022-02035-8]
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES; i.e., gAB) to gauge an intervention in single-case experimental design (SCED) studies, or to synthesize findings in meta-analysis. So far, no research has examined gAB's performance under non-normal distributions. This study expanded Pustejovsky et al. (2014) to investigate the impact of data distributions, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (ρ), ratio of variance components (λ), and autocorrelation (ϕ) on gAB in multiple-baseline (MB) design. The performance of gAB was assessed by relative bias (RB), relative bias of variance (RBV), MSE, and coverage rate of 95% CIs (CR). Findings revealed that gAB was unbiased even under non-normal distributions. gAB's variance was generally overestimated, and its 95% CI was over-covered, especially when distributions were normal or nearly normal combined with small m and N. Large imprecision of gAB occurred when m was small and ρ was large. According to the ANOVA results, data distributions contributed to approximately 49% of variance in RB and 25% of variance in both RBV and CR. m and ρ each contributed to 34% of variance in MSE. We recommend gAB for MB studies and meta-analysis with N ≥ 16 and when either (1) data distributions are normal or nearly normal, m = 6, and ρ = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m ≥ 4, and ρ = 0.2, 0.4, or 0.6. The paper concludes with a discussion of gAB's applicability and design-comparability, and sound reporting practices of ES indices.
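For readers unfamiliar with design-comparable effect sizes, the following is a deliberately naive sketch of the underlying idea for a multiple-baseline study: the average A-to-B mean change across cases, standardized by a denominator that pools between-case and within-case variability so that the scale resembles a between-group standardized mean difference. The actual gAB involves restricted maximum likelihood estimation and a small-sample bias correction (Pustejovsky et al., 2014), both omitted here; the function name and data are hypothetical.

```python
import numpy as np

def naive_between_case_smd(cases):
    """Naive between-case standardized mean difference.

    `cases` is a list of (baseline, intervention) score pairs, one per case.
    Numerator: average A-to-B mean change across cases.
    Denominator: square root of (between-case variance of baseline means
    + average within-case baseline variance). No bias correction applied.
    """
    changes, baseline_means, within_vars = [], [], []
    for a, b in cases:
        a, b = np.asarray(a, float), np.asarray(b, float)
        changes.append(b.mean() - a.mean())
        baseline_means.append(a.mean())
        within_vars.append(a.var(ddof=1))
    between_var = np.var(baseline_means, ddof=1)
    within_var = float(np.mean(within_vars))
    return float(np.mean(changes)) / np.sqrt(between_var + within_var)

# Three hypothetical cases, each improving by 3 points from phase A to phase B.
cases = [([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]),
         ([2.0, 3.0, 4.0], [5.0, 6.0, 7.0]),
         ([3.0, 4.0, 5.0], [6.0, 7.0, 8.0])]
d = naive_between_case_smd(cases)
print(round(d, 3))  # 2.121
```

The simulation conditions in the abstract (m cases, N measurements, intra-class correlation ρ, autocorrelation ϕ) all affect how well an estimator of this kind behaves, which is precisely what the study examines.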
Affiliation(s)
- Li-Ting Chen
- Department of Educational Studies, University of Nevada, Reno, Reno, NV, USA.
- Yi-Kai Chen
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Tong-Rong Yang
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yu-Shan Chiang
- Department of Curriculum & Instruction, Indiana University Bloomington, Bloomington, IN, USA
- Cheng-Yu Hsieh
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Che Cheng
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Qi-Wen Ding
- Institute of Sociology, Academia Sinica, Taipei, Taiwan
- Po-Ju Wu
- Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA
- Chao-Ying Joanne Peng
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Department of Counseling and Educational Psychology, Indiana University Bloomington, Bloomington, IN, USA
3
Manolov R. Does the choice of a linear trend-assessment technique matter in the context of single-case data? Behav Res Methods 2023; 55:4200-4221. [PMID: 36622560] [DOI: 10.3758/s13428-022-02013-0]
Abstract
Trend is one of the data aspects that is an object of assessment in the context of single-case experimental designs. This assessment can be performed both visually and quantitatively. Given that trend, just like other relevant data features such as level, immediacy, or overlap, does not have a single operative definition, a comparison among the existing alternatives is necessary. Previous studies have included illustrations of differences between trend-line fitting techniques using real data. In the current study, I carry out a simulation to study the degree to which different trend-line fitting techniques lead to different degrees of bias, mean square error, and statistical power for a variety of quantifications that entail trend lines. The simulation involves generating both continuous and count data, for several phase lengths, degrees of autocorrelation, and effect sizes (change in level and change in slope). The results suggest that, in general, ordinary least squares estimation performs well in terms of relative bias and mean square error. In particular, a quantification of slope change is associated with better statistical results than quantifying an average difference between conditions on the basis of a projected baseline trend. In contrast, the performance of the split-middle (bisplit) technique is less than optimal.
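To illustrate how two of the compared techniques can disagree, here is a minimal sketch (not the paper's simulation code; data are hypothetical) of the split-middle line alongside an OLS fit. The split-middle line passes through the median (time, score) point of each half of the series; some versions add an iterative adjustment so that half of the points fall on each side of the line, which is omitted here.

```python
import numpy as np

def split_middle_line(y):
    """Split-middle (bisplit) trend line for a single-case data series.

    The series is split into two halves; within each half the median
    session index and median score define an anchor point, and the trend
    line passes through the two anchors. With an odd number of points,
    conventions differ on the middle observation; here the first half
    excludes it and the second half includes it.
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    half = len(y) // 2
    t1, y1 = np.median(t[:half]), np.median(y[:half])
    t2, y2 = np.median(t[half:]), np.median(y[half:])
    slope = (y2 - y1) / (t2 - t1)
    intercept = y1 - slope * t1
    return intercept, slope

def ols_line(y):
    """OLS trend line, for comparison with the split-middle line."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, deg=1)
    return intercept, slope

y = [3.0, 4.0, 6.0, 5.0, 7.0, 9.0]
sm_intercept, sm_slope = split_middle_line(y)
ols_intercept, ols_slope = ols_line(y)
print(sm_intercept, sm_slope)  # 3.0 1.0
print(ols_intercept, ols_slope)
```

Even on this short series the two techniques return different slopes, which is why the choice of trend-assessment technique can matter for any quantification built on the fitted line.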
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Passeig de la Vall d'Hebron 171, 08035, Barcelona, Spain.
4
Manolov R, Onghena P. Defining and assessing immediacy in single-case experimental designs. J Exp Anal Behav 2022; 118:462-492. [PMID: 36106573] [PMCID: PMC9825864] [DOI: 10.1002/jeab.799]
Abstract
Immediacy is one of six data aspects (alongside level, trend, variability, overlap, and consistency) that have to be accounted for when visually analyzing single-case data. Given that it has received considerably less attention than the other data aspects, the current text offers a review of the proposed conceptual definitions of immediacy (i.e., what it refers to) and of the suggested operational definitions (i.e., how exactly it is assessed and/or quantified). Because a variety of conceptual and operational definitions is identified, we propose a sensitivity analysis using a randomization test for assessing immediate effects in single-case experimental designs, identifying when changes were most clear. In such a sensitivity analysis, the immediate effects are tested for multiple possible intervention points and for different possible operational definitions. Robust immediate effects can be detected if the results for the different operational definitions converge.
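A minimal sketch of the randomization-test idea follows. It fixes one operational definition of immediacy (difference between the mean of the first k post-intervention points and the mean of the last k pre-intervention points) and varies only the intervention point; the paper's full sensitivity analysis also varies the operational definition itself. Function names and data are hypothetical.

```python
import numpy as np

def immediacy_effect(data, start_b, k=3):
    """Immediate effect at a candidate intervention point: mean of the first
    k post-intervention points minus mean of the last k pre-intervention points."""
    pre = data[max(0, start_b - k):start_b]
    post = data[start_b:start_b + k]
    return float(np.mean(post) - np.mean(pre))

def randomization_p(data, actual_start, possible_starts, k=3):
    """One-sided randomization test p-value: the proportion of admissible
    intervention points whose immediate effect is at least as large as the
    effect observed at the actual intervention point."""
    observed = immediacy_effect(data, actual_start, k)
    effects = [immediacy_effect(data, s, k) for s in possible_starts]
    return sum(e >= observed for e in effects) / len(effects)

# Hypothetical series with a clear jump at session index 6.
data = [2, 3, 2, 3, 2, 3, 8, 9, 8, 9, 8, 9]
p = randomization_p(data, actual_start=6, possible_starts=range(3, 10))
print(round(p, 3))  # 0.143  (1 out of 7 admissible points)
```

In a designed experiment the set of admissible intervention points would come from the randomization scheme actually used; here `range(3, 10)` simply ensures k points on each side.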
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- Patrick Onghena
- Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven – University of Leuven, Leuven, Belgium
5
Tanious R, Manolov R. Violin plots as visual tools in the meta-analysis of Single-Case Experimental Designs. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 2022. [DOI: 10.5964/meth.9209]
Abstract
Despite the existence of sophisticated statistical methods, systematic reviews regularly indicate that single-case experimental designs (SCEDs) are predominantly analyzed through visual tools. For the quantitative aggregation of results, different meta-analytical techniques are available, but specific visual tools for the meta-analysis of SCEDs are lacking. The present article therefore describes the use of violin plots as visual tools to represent the raw data. We first describe the underlying rationale of violin plots and their main characteristics. We then show how the violin plots can complement the statistics obtained in a quantitative meta-analysis. The main advantages of violin plots as visual tools in meta-analysis are (a) that they preserve information about the raw data from each study, (b) that they have the ability to visually represent data from different designs in one graph, and (c) that they enable the comparison of score distributions from different experimental phases from different studies.
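A basic version of such a plot can be produced with standard plotting tools; the sketch below (using matplotlib, with simulated scores standing in for real study data) draws one violin per experimental phase, which is the core of advantage (c) above: comparing score distributions from different phases.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# Hypothetical raw scores pooled across studies, by experimental phase.
baseline_scores = rng.normal(loc=10, scale=2, size=120)
intervention_scores = rng.normal(loc=15, scale=3, size=120)

fig, ax = plt.subplots(figsize=(5, 4))
parts = ax.violinplot([baseline_scores, intervention_scores],
                      showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Baseline (A)", "Intervention (B)"])
ax.set_ylabel("Outcome score")
ax.set_title("Distribution of raw scores by phase across studies")
fig.savefig("violin_phases.png", dpi=150)
```

In an actual meta-analysis one would draw one violin per phase per study (or per design type), so that the raw-data distributions complement the pooled effect size estimates.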
6
Manolov R, Tanious R, Fernández-Castilla B. A proposal for the assessment of replication of effects in single-case experimental designs. J Appl Behav Anal 2022; 55:997-1024. [PMID: 35467023] [PMCID: PMC9324994] [DOI: 10.1002/jaba.923]
Abstract
In science in general and in the context of single‐case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether (or to what extent) an effect has been replicated are scarce in the behavioral sciences in general and practically nonexistent in the single‐case experimental designs context. We propose an extension of the modified Brinley plot for assessing how many of the effects replicate. To make this assessment possible, a definition of replication is suggested, based on expert judgment rather than on statistical criteria. The definition of replication and its graphical representation are justified, their strengths and limitations are presented, and they are illustrated with real data. User‐friendly software is made available for automatically obtaining the graphical representation.
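The basic modified Brinley plot underlying the proposal is straightforward to sketch: one point per participant, baseline phase mean on the x-axis and intervention phase mean on the y-axis, with the diagonal marking "no change". The code below is an illustrative reconstruction with hypothetical data, not the authors' software (their extension adds a formal replication criterion on top of this display).

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Hypothetical per-participant phase means (baseline A, intervention B).
phase_a_means = np.array([12.0, 9.5, 14.0, 11.0, 10.5])
phase_b_means = np.array([18.0, 15.5, 20.5, 12.0, 17.0])

fig, ax = plt.subplots(figsize=(4.5, 4.5))
ax.scatter(phase_a_means, phase_b_means)
lims = [8, 22]
ax.plot(lims, lims, linestyle="--")  # diagonal: no change from A to B
ax.set_xlim(lims); ax.set_ylim(lims)
ax.set_xlabel("Baseline (A) mean")
ax.set_ylabel("Intervention (B) mean")
ax.set_title("Modified Brinley plot")
fig.savefig("brinley.png", dpi=150)

# Points above the diagonal indicate an increase from A to B; how many
# participants show a change on the same side is the raw material for
# judging whether the effect replicates.
n_improved = int(np.sum(phase_b_means > phase_a_means))
print(n_improved)  # 5
```

Whether these five consistent increases count as "replications" would, under the paper's proposal, depend on an expert-set threshold for a meaningful change, not merely on the side of the diagonal.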
Affiliation(s)
- Rumen Manolov
- Department of Social Psychology and Quantitative Psychology, University of Barcelona
- René Tanious
- Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium
- Belén Fernández-Castilla
- Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven - University of Leuven, Leuven, Belgium
7
Bidhendi Yarandi R, Mansournia MA, Zeraati H, Mohammad K. An intuitive framework for Bayesian posterior simulation methods. Global Epidemiology 2021; 3:100060. [PMID: 37635729] [PMCID: PMC10445998] [DOI: 10.1016/j.gloepi.2021.100060]
Abstract
Purpose: Bayesian inference has become popular. It offers several pragmatic approaches to account for uncertainty in inference decision-making. Various estimation methods have been introduced to implement Bayesian methods. Although these algorithms are powerful, they are not always easy to grasp for non-statisticians. This paper aims to provide an intuitive framework of four essential Bayesian computational methods for epidemiologists and other health researchers. We do not cover an extensive mathematical discussion of these approaches, but instead offer a non-quantitative description of these algorithms and provide some illuminating examples. Materials and methods: Bayesian computational methods, namely importance sampling, rejection sampling, Markov chain Monte Carlo (MCMC), and data augmentation, are presented. Results and conclusions: The substantial amount of research published on Bayesian inference highlights its popularity among researchers, although the basic concepts are not always straightforward for interested learners. We show that alternative approaches, such as a weighted prior approach, which are intuitively appealing and easy to understand, work well in the case of low-dimensional problems with appropriate prior information. Otherwise, MCMC is a trouble-free tool.
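As a concrete example of the simplest of the four methods, here is a generic rejection sampler (an illustrative sketch, not the paper's code). A toy Beta(3, 2) posterior, e.g. a binomial likelihood with a uniform prior, is sampled using a Uniform(0, 1) proposal; the Beta(3, 2) density peaks at x = 2/3 with value 16/9 ≈ 1.78, so the bound M = 2 suffices.

```python
import numpy as np

def rejection_sample(log_target, proposal_sampler, proposal_logpdf,
                     log_m, n_draws, rng):
    """Generic rejection sampler.

    Draws x from the proposal and accepts with probability
    target(x) / (M * proposal(x)), where M (supplied on the log scale as
    log_m) bounds the density ratio. Accepted draws are exact samples
    from the target distribution.
    """
    draws = []
    while len(draws) < n_draws:
        x = proposal_sampler(rng)
        log_accept = log_target(x) - (log_m + proposal_logpdf(x))
        if np.log(rng.uniform()) < log_accept:
            draws.append(x)
    return np.array(draws)

rng = np.random.default_rng(1)
samples = rejection_sample(
    # log of the Beta(3, 2) density: 12 * x^2 * (1 - x)
    log_target=lambda x: np.log(12.0) + 2 * np.log(x) + np.log1p(-x),
    proposal_sampler=lambda r: r.uniform(),
    proposal_logpdf=lambda x: 0.0,  # Uniform(0, 1) has density 1
    log_m=np.log(2.0),
    n_draws=5000,
    rng=rng,
)
print(samples.mean())  # should be close to E[Beta(3, 2)] = 0.6
```

Rejection sampling is exact but wasteful in high dimensions (the bound M blows up), which is one intuition for why MCMC takes over beyond low-dimensional problems.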
Affiliation(s)
- Razieh Bidhendi Yarandi
- Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Mohammad Ali Mansournia
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Zeraati
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Kazem Mohammad
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
8
Accurate models vs. accurate estimates: A simulation study of Bayesian single-case experimental designs. Behav Res Methods 2021; 53:1782-1798. [PMID: 33575987] [PMCID: PMC8367899] [DOI: 10.3758/s13428-020-01522-0]
Abstract
Although statistical practices to evaluate intervention effects in single-case experimental design (SCEDs) have gained prominence in recent times, models are yet to incorporate and investigate all their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is whether in SCED data that show trend, there is indeterminacy between estimating slope and autocorrelation, because both contribute to trend, and the data have a limited number of observations. Using Monte Carlo simulation, we compared the performance of four Bayesian change-point models: (a) intercepts only (IO), (b) slopes but no autocorrelations (SI), (c) autocorrelations but no slopes (NS), and (d) both autocorrelations and slopes (SA). Weakly informative priors were used to remain agnostic about the parameters. Coverage rates showed that for the SA model, either the slope effect size or the autocorrelation credible interval almost always erroneously contained 0, and the type II errors were prohibitively large. Considering the 0-coverage and coverage rates of slope effect size, intercept effect size, mean relative bias, and second-phase intercept relative bias, the SI model outperformed all other models. Therefore, it is recommended that researchers favor the SI model over the other three models. Research studies that develop slope effect sizes for SCEDs should consider the performance of the statistic by taking into account coverage and 0-coverage rates. These helped uncover patterns that were not realized in other simulation studies. We underline the need for investigating the use of informative priors in SCEDs.
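The slope/autocorrelation indeterminacy at issue is easy to reproduce in data generation: both mechanisms push the series upward over time. The sketch below (a simplified generator in the spirit of such Monte Carlo studies, not the article's code; names and parameter values are hypothetical) simulates a two-phase A-B series with a linear slope, a level change, and AR(1) errors.

```python
import numpy as np

def simulate_sced(n_a, n_b, intercept, slope, level_change, phi, sigma, rng):
    """Simulate a two-phase (A-B) single-case series in which both a slope
    and AR(1) autocorrelation can contribute to apparent trend.

    y_t = intercept + slope * t + level_change * phase_t + e_t,
    e_t = phi * e_{t-1} + w_t,  with w_t ~ Normal(0, sigma^2).
    """
    n = n_a + n_b
    phase = np.concatenate([np.zeros(n_a), np.ones(n_b)])
    t = np.arange(n, dtype=float)
    errors = np.empty(n)
    e = 0.0
    for i in range(n):
        e = phi * e + rng.normal(0.0, sigma)
        errors[i] = e
    return intercept + slope * t + level_change * phase + errors

rng = np.random.default_rng(7)
y = simulate_sced(n_a=8, n_b=8, intercept=5.0, slope=0.3,
                  level_change=2.0, phi=0.4, sigma=1.0, rng=rng)
print(y.shape)  # (16,)
```

With only 8 points per phase, a model asked to estimate both `slope` and `phi` from such a series has little information to separate them, which is the indeterminacy the simulation study quantifies.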
9
Natesan Batley P, Nandakumar R, Palka JM, Shrestha P. Comparing the Bayesian Unknown Change-Point Model and Simulation Modeling Analysis to Analyze Single Case Experimental Designs. Front Psychol 2021; 11:617047. [PMID: 33519641] [PMCID: PMC7843386] [DOI: 10.3389/fpsyg.2020.617047]
Abstract
Recently, there has been increased interest in developing statistical methodologies for analyzing single-case experimental design (SCED) data to supplement visual analysis. Some of these are simulation-driven, such as Bayesian methods, because Bayesian methods can compensate for the small sample sizes that are a main challenge of SCEDs. Two simulation-driven approaches, the Bayesian unknown change-point (BUCP) model and simulation modeling analysis (SMA), were compared in the present study for three real datasets that exhibit "clear" immediacy, "unclear" immediacy, and delayed effects. Although SMA estimates can be used to answer some aspects of the functional relationship between the independent and outcome variables, they cannot address immediacy or provide an effect size estimate that accounts for autocorrelation, as required by the What Works Clearinghouse (WWC) Standards. BUCP overcomes these drawbacks of SMA. In the final analysis, it is recommended that both visual and statistical analyses be conducted for a thorough analysis of SCEDs.
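To convey the change-point idea without the Bayesian machinery, the sketch below performs a least-squares grid search for the point that best splits a series into two constant-level phases. This is only a frequentist analogue: BUCP instead places a prior over the change point and returns a full posterior (and credible intervals) rather than a single index. Data and function names are hypothetical.

```python
import numpy as np

def best_change_point(y, min_phase=3):
    """Grid search for the change point that best splits a series into two
    constant-level phases, by minimising the pooled sum of squared errors.

    Returns the index at which the second phase begins and the SSE there.
    """
    y = np.asarray(y, dtype=float)
    best_cp, best_sse = None, np.inf
    for cp in range(min_phase, len(y) - min_phase + 1):
        a, b = y[:cp], y[cp:]
        sse = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
        if sse < best_sse:
            best_cp, best_sse = cp, float(sse)
    return best_cp, best_sse

# Hypothetical series with a level shift starting at session index 5.
y = [4, 5, 4, 5, 4, 9, 10, 9, 10, 9]
cp, sse = best_change_point(y)
print(cp)  # 5
```

An estimated change point close to the intended intervention point supports an immediate effect; a much later one suggests a delayed effect, which is the distinction the three datasets in the study illustrate.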
Affiliation(s)
- Ratna Nandakumar
- School of Education, University of Delaware, Newark, DE, United States
- Jayme M. Palka
- Department of Educational Psychology, University of North Texas, Denton, TX, United States
- Pragya Shrestha
- School of Education, University of Delaware, Newark, DE, United States
10
Manolov R, Tanious R. Assessing Consistency in Single-Case Data Features Using Modified Brinley Plots. Behav Modif 2020; 46:581-627. [PMID: 33371723] [DOI: 10.1177/0145445520982969]
Abstract
The current text deals with the assessment of consistency of data features from experimentally similar phases and consistency of effects in single-case experimental designs. Although consistency is frequently mentioned as a critical feature, few quantifications have been proposed so far: namely, under the acronyms CONDAP (consistency of data patterns in similar phases) and CONEFF (consistency of effects). Whereas CONDAP allows assessing the consistency of data patterns, the proposals made here focus on the consistency of data features such as level, trend, and variability, as represented by summary measures (mean, ordinary least squares slope, and standard deviation, respectively). The assessment of consistency of effect is also made in terms of these three data features, while also including the study of the consistency of an immediate effect (if expected). The summary measures are represented as points on a modified Brinley plot and their similarity is assessed via quantifications of distance. Both absolute and relative measures of consistency are proposed: the former expressed in the same measurement units as the outcome variable and the latter as a percentage. Illustrations with real data sets (multiple baseline, ABAB, and alternating treatments designs) show the wide applicability of the proposals. We developed a user-friendly website to offer both the graphical representations and the quantifications.
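A minimal sketch of the quantification side of this proposal follows (an illustrative reconstruction, not the authors' website code): each phase is summarised by the triplet (mean, OLS slope, standard deviation), and consistency between experimentally similar phases is gauged by the Euclidean distance between their triplets. This corresponds to the absolute (same-units) measure; the relative, percentage-based variant described in the abstract is omitted.

```python
import numpy as np
from itertools import combinations

def phase_features(y):
    """Summary features of one phase: mean (level), OLS slope (trend),
    and standard deviation (variability)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    slope = np.polyfit(t, y, deg=1)[0]
    return np.array([y.mean(), slope, y.std(ddof=1)])

def consistency_distances(phases):
    """Pairwise Euclidean distances between the feature triplets of
    experimentally similar phases; smaller distances indicate more
    consistent data features."""
    feats = [phase_features(p) for p in phases]
    return {(i, j): float(np.linalg.norm(feats[i] - feats[j]))
            for i, j in combinations(range(len(feats)), 2)}

# Two hypothetical baseline (A) phases from an ABAB design: identical trend
# and variability, levels differing by 0.5 points.
a1 = [5.0, 6.0, 5.0, 6.0]
a2 = [5.5, 6.5, 5.5, 6.5]
dists = consistency_distances([a1, a2])
print(dists)  # {(0, 1): 0.5}
```

Note that the three features are on different scales (slope per session vs. score units), so in practice one would inspect each feature's contribution to the distance, as the modified Brinley plot representation encourages.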
11
Natesan Batley P, Contractor AA, Caldas SV. Bayesian Time-Series Models in Single Case Experimental Designs: A Tutorial for Trauma Researchers. J Trauma Stress 2020; 33:1144-1153. [PMID: 33205545] [PMCID: PMC8246830] [DOI: 10.1002/jts.22614]
Abstract
Single-case experimental designs (SCEDs) involve obtaining repeated measures from one or a few participants before, during, and, sometimes, after treatment implementation. Because they are cost-, time-, and resource-efficient and can provide robust causal evidence for more large-scale research, SCEDs are gaining popularity in trauma treatment research. However, sophisticated techniques to analyze SCED data remain underutilized. Herein, we discuss the utility of SCED data for trauma research, provide recommendations for addressing challenges specific to SCED approaches, and introduce a tutorial for two Bayesian models-the Bayesian interrupted time-series (BITS) model and the Bayesian unknown change-point (BUCP) model-that can be used to analyze the typically small sample, autocorrelated, SCED data. Software codes are provided for the ease of guiding readers in estimating these models. Analyses of a dataset from a published article as well as a trauma-specific simulated dataset are used to illustrate the models and demonstrate the interpretation of the results. We further discuss the implications of using such small-sample data-analytic techniques for SCEDs specific to trauma research.