1. Exploring the Effects of Yoga Therapy on Heart Rate Variability and Patient-Reported Outcomes After Cancer Treatment: A Study Protocol. Integr Cancer Ther 2022; 21:15347354221075576. PMID: 35393867. PMCID: PMC9016564. DOI: 10.1177/15347354221075576.
Abstract
BACKGROUND Following cancer treatment, adults commonly report worsened patient-reported outcomes (PROs) such as anxiety, stress, depression, persistent and upsetting cognitive complaints, unrelenting fatigue, and reduced quality of life. Poorer PROs are associated with disrupted autonomic nervous system functioning as measured by heart rate variability (HRV), and both have been associated with greater morbidity and mortality. Interventions to improve HRV and PROs among adults following cancer treatment are needed, and yoga therapy holds promise as such an intervention. We therefore conducted a single-subject exploratory experimental study to investigate the effects of yoga therapy on HRV and specific PROs (ie, cancer-related fatigue, anxiety, cognitive function, depression, stress, quality of life) in adults treated for cancer. To reduce publication bias, improve reproducibility, and serve as a reference for the forthcoming report of study results, we present the study protocol herein. METHODS Participants were adults who had completed cancer treatment, recruited from the Ottawa Integrative Cancer Centre. Consenting and eligible participants received one 1:1 yoga therapy session (ie, 1 participant, 1 yoga therapist) and 6 weekly group-based yoga therapy sessions (ie, 2-3 participants, 1 yoga therapist). Participants completed assessments 7 times: 3 times prior to the program (ie, -6 weeks, -3 weeks, and immediately prior to the 1:1 yoga therapy session), immediately following the 1:1 yoga therapy session, prior to the first group-based yoga therapy session, after the last group-based yoga therapy session, and at a 6-week follow-up. Hierarchical linear modeling will be used to test the average effects of the yoga therapy program across participants.
DISCUSSION This study will explore several novel hypotheses, including whether yoga therapy can improve HRV and/or specific PROs among adults treated for cancer acutely (ie, during a 1:1 yoga therapy session) and/or through repeated exposure (ie, after completing 6 weeks of group-based yoga therapy). Although the findings will require confirmation or refutation in future trials, they may provide initial evidence that yoga therapy benefits adults treated for cancer. TRIAL REGISTRATION ISRCTN registry, ISRCTN64763228. Registered retrospectively on December 12, 2021. URL of trial registry record: https://www.isrctn.com/ISRCTN64763228.
2.
Abstract
In the context of single-case experimental designs, replication is crucial. On the one hand, the replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the "replicability crisis" presents a more general context further emphasizing the need for assessing consistency in replications. In the current text, we focus on replication of effects within a study, and we specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects refers to one of the promising data analytical techniques, multilevel models, also known as hierarchical linear models or mixed effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero. This is relevant for assessing whether the effect is replicated as being non-null. However, we consider that it is more relevant and informative to assess, for each case, whether the confidence interval for the random effects includes zero (i.e., whether the fixed effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself is different from zero. The proposal for assessing consistency is illustrated with real data and is implemented in free user-friendly software.
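The two consistency checks described in this abstract can be sketched numerically. The following is a minimal illustration with hypothetical data: simple per-case mean differences and a precision-weighted pooled estimate stand in for a full multilevel fit, so the helper names and formulas are assumptions, not the authors' implementation.

```python
import numpy as np

def case_effect(baseline, treatment):
    """Per-case treatment effect (mean difference) and its standard error."""
    diff = np.mean(treatment) - np.mean(baseline)
    se = np.sqrt(np.var(baseline, ddof=1) / len(baseline)
                 + np.var(treatment, ddof=1) / len(treatment))
    return diff, se

def consistency_check(cases, z=1.96):
    """For each case in a multiple-baseline dataset, ask:
    (1) does the 95% CI for the individual effect exclude zero
        (effect replicated as non-null)?
    (2) does that CI contain the pooled estimate standing in for the
        fixed effect (effect consistent in size)?"""
    effects = [case_effect(b, t) for b, t in cases]
    # precision-weighted pooled estimate, a stand-in for the fixed effect
    w = np.array([1.0 / se**2 for _, se in effects])
    fixed = np.sum(w * np.array([d for d, _ in effects])) / np.sum(w)
    report = []
    for d, se in effects:
        lo, hi = d - z * se, d + z * se
        report.append({"effect": d, "ci": (lo, hi),
                       "non_null": not (lo <= 0.0 <= hi),
                       "consistent": lo <= fixed <= hi})
    return fixed, report
```

In a real analysis the individual effects and their intervals would come from the random effects of a fitted multilevel model rather than from per-case means.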
3. Improving spelling for at-risk kindergartners through element skill frequency building. Behavioral Interventions 2019. DOI: 10.1002/bin.1701.
4. Real-time monitoring technology in single-case experimental design research: Opportunities and challenges. Behav Res Ther 2019; 117:87-96. DOI: 10.1016/j.brat.2018.11.017.
5. Reprint of "The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and Elaboration". Prat Psychol 2019. DOI: 10.1016/j.prps.2019.03.001.
6. Statistical analysis in Small-N Designs: using linear mixed-effects modeling for evaluating intervention effectiveness. Aphasiology 2019; 33:1-30. PMID: 33012945. PMCID: PMC7531584. DOI: 10.1080/02687038.2018.1454884.
Abstract
BACKGROUND Advances in statistical methods and computing power have led to renewed interest in addressing the statistical analysis challenges posed by Small-N Designs (SND). Linear mixed-effects modeling (LMEM) is a multiple regression technique that is flexible, suitable for SND, and able to provide standardized effect sizes and measures of statistical significance. AIMS Our primary goals are to: 1) explain LMEM at the conceptual level, situating it in the context of treatment studies, and 2) provide practical guidance for implementing LMEM in repeated measures SND. METHODS & PROCEDURES We illustrate an LMEM analysis with data from a longitudinal training study of five individuals with acquired dysgraphia, analyzing both binomial (accuracy) and continuous (reaction time) repeated measurements. OUTCOMES & RESULTS The LMEM analysis reveals that both spelling accuracy and reaction time improved and that, for accuracy, improvement was significantly faster under a training schedule with distributed, compared to clustered, practice. We present guidance on obtaining and interpreting various effect sizes and measures of statistical significance from LMEM, and include a simulation study comparing two p-value methods for generalized LMEM. CONCLUSION We make a strong case for applying LMEM to the analysis of training studies as a preferable alternative to visual analysis or other statistical techniques. Applied to a treatment dataset, the approach holds up under the extreme conditions of small numbers of individuals with repeated measures training data, for both continuous (reaction time) and binomially distributed (accuracy) dependent measures. It provides standardized effect sizes through readily available and well-supported statistical packages, along with statistically rigorous estimates of the expected average training effect, taking into account variability across both items and individuals.
7.
Abstract
This project identifies difficulties in analyzing single-case data and showcases a new method, dynamic multilevel analysis (DMA). We re-analyze a published meta-analysis of single-case interventions for participants with autism. Analytic difficulties include missing data, nested data, baseline trends, time periods, recency effects, false positives from testing many hypotheses, interactions among explanatory variables, indirect effects (including false negatives), and sampling errors. Furthermore, nonoverlap analyses can yield contested results, overvalue data near overlap boundaries, lose statistical power, and lack estimates of explained variance or unexplained residuals. To address these difficulties, DMA integrates several methods, including multilevel and time-series analyses. DMA re-analysis showed not only robust intervention effects but also time-, outcome-, and intervention component-specific effects. Moreover, DMA informs the suitability of time hypotheses or meta-analysis, and DMA's components can be used separately, notably its time-series analyses for small samples (e.g., one participant). Hence, DMA can help researchers analyze single-case data more accurately.
8. Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children. J Deaf Stud Deaf Educ 2017; 22:404-421. PMID: 28961874. PMCID: PMC5881260. DOI: 10.1093/deafed/enx023.
Abstract
Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed at five occasions and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.
9. Analyzing Therapeutic Change Using Modified Brinley Plots: History, Construction, and Interpretation. Behav Ther 2017; 48:115-127. PMID: 28077215. DOI: 10.1016/j.beth.2016.09.002.
Abstract
The paper reviews the history, construction, and interpretation of modified Brinley plots, a scatter plot used in therapy outcome research to compare each individual participant's scores on the same dependent variable at Time 1 (normally pretreatment baseline; x-axis) with scores at selected times during or after treatment (y-axis). Since 1965, eponymously named Brinley plots have occasionally been used in experimental psychology to display group mean data. Between 1979 and 1995 a number of clinical researchers modified Brinley plots to show individuals' data, but these plots have received little subsequent use. When the plot is constructed with orthogonal axes having the same origin and scale values, little or no change over time is shown by individuals' data points lying on or close to the 45° diagonal, while the magnitude and direction of any improvement (or deterioration), outliers, and the extent of replication across cases are shown by the dispersion of points away from the diagonal. Interpretation is aided by displaying reliable change boundaries, clinical cutoffs, means, variances, confidence intervals, and effect sizes directly on the graph. Modified Brinley plots are directly informative about individual change during therapy in the context of concurrent change in others in the same (or a different) condition, clearly show whether outcomes are replicated and clinically significant, and make nomothetic group information, notably effect sizes, directly available. They usefully complement other forms of analysis in therapy outcome research.
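The reliable change boundaries mentioned in this abstract run parallel to the 45° diagonal, so each participant's point can be classified by how far its pre-to-post change exceeds them. A minimal sketch, assuming the Jacobson-Truax reliable change criterion (the data, names, and numbers are illustrative, not the paper's):

```python
import numpy as np

def brinley_classify(pre, post, sd_pre, reliability, higher_is_better=True):
    """Classify each participant's change on a modified Brinley plot.
    Points on the 45-degree diagonal indicate no change; the reliable
    change boundaries run parallel to it at +/- 1.96 * SE(diff)."""
    # standard error of the difference under the Jacobson-Truax criterion
    se_diff = sd_pre * np.sqrt(2 * (1 - reliability))
    cutoff = 1.96 * se_diff
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    if not higher_is_better:
        change = -change
    labels = np.where(change > cutoff, "improved",
                      np.where(change < -cutoff, "deteriorated", "no change"))
    return list(labels)
```

Plotting the (pre, post) pairs with the diagonal and the two boundary lines would reproduce the graphical display the paper describes; the function above only computes the classification.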
10. Recommendations for Choosing Single-Case Data Analytical Techniques. Behav Ther 2017; 48:97-114. PMID: 28077224. DOI: 10.1016/j.beth.2016.04.008.
Abstract
The current paper responds to the need to provide guidance to applied single-case researchers regarding the possibilities of data analysis. The number of available single-case data analytical techniques has grown in recent years, and a general overview comparing their possibilities has been missing. Such an overview is provided here, covering techniques that yield results in terms of a raw or standardized difference, procedures related to regression analysis, and nonoverlap and percentage change indices. The comparison addresses the type of quantification provided, the data features taken into account, the conditions in which the techniques are appropriate, the possibilities for meta-analysis, and the evidence available on their performance. Moreover, we provide a set of recommendations for choosing appropriate analysis techniques, pointing to specific situations (aims, types of data, researchers' resources) and the data analytical techniques that are most appropriate in them. The recommendations are contextualized with a variety of published single-case data sets to illustrate a range of realistic situations that researchers have faced and may face in their investigations.
11. Reporting single-case design studies: Advice in relation to the designs' methodological and analytical peculiarities. Anuario de Psicologia 2017. DOI: 10.1016/j.anpsic.2017.05.004.
12. How Can Single-Case Data Be Analyzed? Software Resources, Tutorial, and Reflections on Analysis. Behav Modif 2016; 41:179-228. DOI: 10.1177/0145445516664307.
Abstract
The present article presents a series of software developments in the quantitative analysis of data obtained via single-case experimental designs (SCEDs), together with a tutorial describing these developments. The tutorial focuses on software implementations based on freely available platforms such as R and aims to bring statistical advances closer to applied researchers, helping them become autonomous in the data analysis stage of a study. The range of analyses dealt with in the tutorial is illustrated on a typical single-case dataset, relying heavily on graphical data representations. We illustrate how visual and quantitative analyses can be used jointly, giving complementary information and helping the researcher decide whether there is an intervention effect, how large it is, and whether it is practically significant. To help applied researchers use the analyses, we have organized the data in the different ways required by the different analytical procedures and made these data available online. We also provide Internet links to all the free software available, as well as the main references for the analytical techniques. Finally, we suggest that appropriate and informative data analysis is likely to be a step forward in documenting and communicating results and in increasing the scientific credibility of SCEDs.
13. Analyzing Two-Phase Single-Case Data with Non-overlap and Mean Difference Indices: Illustration, Software Tools, and Alternatives. Front Psychol 2016; 7:32. PMID: 26834691. PMCID: PMC4720744. DOI: 10.3389/fpsyg.2016.00032.
Abstract
Two-phase single-case designs, consisting of a baseline evaluation followed by an intervention, represent the most clinically straightforward option for combining professional practice and research. However, unless they are part of a multiple-baseline schedule, such designs do not allow a causal relation between the intervention and the behavior to be demonstrated. Although the statistical options reviewed here cannot overcome this methodological limitation, we aim to make practitioners and applied researchers aware of the appropriate options available for extracting maximum information from the data. In the current paper, we suggest that the evaluation of behavioral change should include visual and quantitative analyses, complementing the substantive criteria regarding the practical importance of the behavioral change. Specifically, we emphasize the need to use structured criteria for visual analysis, such as those summarized in the What Works Clearinghouse Standards, especially when complemented by visual aids, as illustrated here. For quantitative analysis, we focus on the nonoverlap of all pairs and the slope and level change procedure, as they offer straightforward information and have shown reasonable performance. An illustration is provided of the use of these three pieces of information: visual, quantitative, and substantive. To make the use of visual and quantitative analysis feasible, open source software is referred to and demonstrated. To give practitioners and applied researchers a more complete guide, several analytical alternatives are commented on, pointing out the situations (aims, data patterns) for which they are potentially useful.
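The nonoverlap of all pairs (NAP) index discussed in this abstract has a simple definition: the proportion of all baseline-treatment pairs in which the treatment observation shows improvement, with ties counted as half. A minimal sketch with illustrative data (not the authors' software):

```python
import numpy as np

def nap(baseline, treatment, increase_expected=True):
    """Nonoverlap of All Pairs for a two-phase (AB) data series.
    Compares every baseline point with every treatment point;
    a NAP of 1.0 means complete nonoverlap (maximal effect),
    0.5 means chance-level overlap."""
    b = np.asarray(baseline, dtype=float)
    t = np.asarray(treatment, dtype=float)
    if not increase_expected:  # for behaviors targeted for reduction
        b, t = -b, -t
    wins = (t[None, :] > b[:, None]).sum()   # treatment point exceeds baseline
    ties = (t[None, :] == b[:, None]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(b) * len(t))
```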
14. A brief primary care intervention to reduce fear of movement in chronic low back pain patients. Transl Behav Med 2015; 5:113-21. PMID: 25729460. DOI: 10.1007/s13142-014-0292-x.
Abstract
Interventions based on the fear avoidance model of chronic pain are effective but have not been successfully implemented in primary care. A brief primary care-based (PCB) intervention that included a single educational session, speed walking (an in vivo desensitization exposure task), and visual performance feedback was designed to reduce fear avoidance beliefs and improve function in 4 patients with chronic low back pain. It was hypothesized that speed walking times and key measures of the fear avoidance model would improve following the brief intervention. A multiple baseline across subjects with a changing criterion design indicated that speed walking times improved from baseline only after the PCB intervention was delivered. Six fear avoidance model outcome measures improved from baseline to the end of the study, and five of the six improved from the end of the study to follow-up. This study provides evidence for the efficacy of a brief PCB fear avoidance intervention that was successfully implemented in a busy clinic for the treatment of chronic pain.
15.
16.
Abstract
In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and which reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications involve comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns varying in the presence and type of effect, and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless at detecting a functional relation only when it is present and not when it is absent, an option based on projecting the split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use as a weight the inverse of the data variability measure used in the quantification for assessing the functional relation. We offer easy-to-use code for open-source software implementing some of the quantifications.
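The split-middle trend projection used by the best-performing option in this abstract can be sketched as follows. This is one common convention for the split (halving the series and, for odd lengths, dropping the middle point); the helper names and data are illustrative, not the authors' code:

```python
import numpy as np

def split_middle_trend(y):
    """Split-middle trend for a baseline phase: the line through the
    (median time, median value) points of the two halves of the data."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)
    h1, h2 = slice(0, n // 2), slice((n + 1) // 2, n)  # middle point dropped if n is odd
    x1, y1 = np.median(t[h1]), np.median(y[h1])
    x2, y2 = np.median(t[h2]), np.median(y[h2])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

def project_baseline(baseline, n_treatment):
    """Project the baseline split-middle trend into the treatment phase,
    giving the values expected if the baseline pattern had continued."""
    slope, intercept = split_middle_trend(baseline)
    t = np.arange(len(baseline), len(baseline) + n_treatment)
    return intercept + slope * t
```

Comparing actual treatment measurements against this projection (with a variability band around it, as in exploratory data analysis) is the kind of check the abstract describes.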
17. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research. Behav Modif 2014; 38:665-704. PMID: 24902590. DOI: 10.1177/0145445514535243.
Abstract
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify the predictors in a regression model so as to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow the synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
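As a rough illustration of one common design-matrix specification for a two-phase series (an assumption about the parameterization, not necessarily the one this article recommends), the four columns below estimate the baseline level, baseline trend, immediate level change, and slope change:

```python
import numpy as np

def ab_design_matrix(n_baseline, n_treatment):
    """Design matrix for a two-phase (AB) single-subject series with
    columns: intercept, baseline trend, level change, slope change.
    The slope-change predictor restarts time at the first treatment
    observation, so the level-change coefficient estimates the
    immediate shift at the phase change."""
    n = n_baseline + n_treatment
    time = np.arange(n, dtype=float)
    phase = (time >= n_baseline).astype(float)       # 0 = baseline, 1 = treatment
    slope_change = phase * (time - n_baseline)       # 0, 0, ..., 0, 1, 2, ...
    return np.column_stack([np.ones(n), time, phase, slope_change])
```

With an outcome vector `y`, `np.linalg.lstsq(X, y, rcond=None)` would then give ordinary least squares estimates of the four coefficients; alternative centerings of the time predictors change what the level-change coefficient means, which is exactly the issue the article examines.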
18.
Abstract
In this editorial discussion we reflect on the issues addressed by, and arising from, the papers in this special issue on Single-Case Experimental Design (SCED) study methodology. We identify areas of consensus and disagreement regarding the conduct and analysis of SCED studies. Despite the long history of application of SCEDs in studies of interventions in clinical and educational settings, the field is still developing. There is an emerging consensus on methodological quality criteria for many aspects of SCEDs, but disagreement on the most appropriate methods of SCED data analysis. Our aim is to stimulate this ongoing debate and highlight issues requiring further attention from applied researchers and methodologists. In addition, we offer tentative criteria to support decision-making in relation to the selection of analytical techniques in SCED studies. Finally, we stress that large-scale interdisciplinary collaborations, such as the current Special Issue, are necessary if SCEDs are to play a significant role in the development of the evidence base for clinical practice.