1
Verdam MGE. Power analyses for measurement model misspecification and response shift detection with structural equation modeling. Qual Life Res 2024;33:1241-1256. PMID: 38427288; PMCID: PMC11045588; DOI: 10.1007/s11136-024-03605-3.
Abstract
PURPOSE Statistical power for response shift detection with structural equation modeling (SEM) is currently underreported. The present paper addresses this issue by providing worked-out examples and syntax for the power calculations relevant to the statistical tests associated with the SEM approach for response shift detection. METHODS Power calculations and related sample-size requirements are illustrated for two modeling goals: (1) detecting misspecification in the measurement model, and (2) detecting response shift. Power analyses for hypotheses regarding (exact) overall model fit and the presence of response shift are demonstrated step by step. The freely available and user-friendly R package lavaan and the Shiny app 'power4SEM' are used for the calculations. RESULTS Using the SF-36 as an example, we illustrate the specification of null-hypothesis (H0) and alternative-hypothesis (H1) models to calculate chi-square-based power for the test of overall model fit, the omnibus test of response shift, and the specific test of response shift. For example, we show that a sample size of 506 is needed to reject an incorrectly specified measurement model when the actual model has two medium-sized cross-loadings. We also illustrate power calculation based on the RMSEA index of approximate fit, where H0 and H1 are defined in terms of RMSEA values. CONCLUSION By providing accessible resources for performing power analyses and emphasizing the different power analyses associated with different modeling goals, we hope to facilitate the uptake of power analyses for response shift detection with SEM and thereby enhance the stringency of response shift research.
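The chi-square-based and RMSEA-based power logic summarized in this abstract can be illustrated outside lavaan/power4SEM. Below is a minimal sketch in Python with scipy rather than the paper's R workflow; the degrees of freedom and RMSEA values used are illustrative assumptions, not figures taken from the article.

```python
from scipy.stats import chi2, ncx2

def chi2_power(df, ncp, alpha=0.05):
    """Power of the chi-square test of exact fit, given the noncentrality
    parameter (ncp) implied by a particular misspecification."""
    crit = chi2.ppf(1 - alpha, df)   # critical value under the central H0
    return ncx2.sf(crit, df, ncp)    # P(reject H0 | H1 noncentrality)

def rmsea_power(n, df, rmsea0, rmsea1, alpha=0.05):
    """Power when H0 and H1 are defined in terms of RMSEA values, using
    the MacCallum-Browne-Sugawara noncentrality lambda = (n-1)*df*rmsea^2."""
    crit = ncx2.ppf(1 - alpha, df, (n - 1) * df * rmsea0 ** 2)
    return ncx2.sf(crit, df, (n - 1) * df * rmsea1 ** 2)

# Illustrative values only (df = 51 is hypothetical, not from the paper)
print(round(rmsea_power(n=506, df=51, rmsea0=0.05, rmsea1=0.08), 3))
```

For a specific misspecification (e.g., omitted cross-loadings), the noncentrality parameter is obtained by fitting the H0 model to the H1-implied covariance matrix, which is the step the paper performs in lavaan.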
Affiliation(s)
- M G E Verdam
- Department of Methodology and Statistics, Institute of Psychology, Leiden University, P.O. Box 9555, 2300 RB, Leiden, The Netherlands.
- Medical Psychology, Amsterdam UMC Location University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands.
2
Sawatzky R, Sajobi TT, Russell L, Awosoga OA, Ademola A, Böhnke JR, Lawal O, Brobbey A, Lix LM, Anota A, Sebille V, Sprangers MAG, Verdam MGE. Response shift results of quantitative research using patient-reported outcome measures: a descriptive systematic review. Qual Life Res 2024;33:293-315. PMID: 37702809; PMCID: PMC10850024; DOI: 10.1007/s11136-023-03495-x.
Abstract
PURPOSE The objective of this systematic review was to describe the prevalence and magnitude of response shift effects for different response shift methods, populations, study designs, and patient-reported outcome measures (PROMs). METHODS A literature search was performed in MEDLINE, PsycINFO, CINAHL, EMBASE, Social Science Citation Index, and Dissertations & Theses Global to identify longitudinal quantitative studies, published before 2021, that examined response shift using PROMs. The magnitude of each response shift effect (effect size, R-squared, or percentage of respondents with response shift) was ascertained from reported statistical information or as stated in the manuscript. Prevalence and magnitudes of response shift effects were summarized at two levels of analysis (study and effect levels), separately for recalibration and reprioritization/reconceptualization, and for different response shift methods and population, study design, and PROM characteristics. Analyses were conducted twice: (a) including all studies and samples, and (b) including only unrelated studies and independent samples. RESULTS Of the 150 included studies, 130 (86.7%) detected response shift effects. Of the 4868 effects investigated, 793 (16.3%) revealed response shift. Effect sizes could be determined for 105 (70.0%) of the studies, for a total of 1130 effects, of which 537 (47.5%) resulted in detection of response shift. Whereas effect sizes varied widely, most median recalibration effect sizes (Cohen's d) were between 0.20 and 0.30, and median reprioritization/reconceptualization effect sizes rarely exceeded 0.15, across the characteristics. Similar results were obtained from unrelated studies. CONCLUSION The results draw attention to the need to focus on understanding variability in response shift results: who experiences response shift, to what extent, and under which circumstances?
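The Cohen's d effect sizes summarized in this review express a standardized mean change and can be computed along the following lines. This is only a sketch: the pooled-SD-across-occasions convention used below is one common choice, not necessarily the one applied by every included study.

```python
import numpy as np

def cohens_d_change(pre, post):
    """Standardized mean change: mean difference between occasions
    divided by the SD pooled across the two measurement occasions."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
    return (post.mean() - pre.mean()) / pooled_sd
```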
Affiliation(s)
- Richard Sawatzky
- School of Nursing, Trinity Western University, 22500 University Drive, Langley, BC, V2Y 1Y1, Canada.
- Centre for Advancing Health Outcomes, St. Paul's Hospital, Vancouver, Canada.
- University of Gothenburg Centre for Person-Centred Care (GPCC), Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
- Tolulope T Sajobi
- Department of Community Health Sciences, University of Calgary, Calgary, Canada
- Lara Russell
- School of Nursing, Trinity Western University, 22500 University Drive, Langley, BC, V2Y 1Y1, Canada
- Centre for Advancing Health Outcomes, St. Paul's Hospital, Vancouver, Canada
- Ayoola Ademola
- Department of Community Health Sciences, University of Calgary, Calgary, Canada
- Jan R Böhnke
- School of Health Sciences, University of Dundee, Dundee, UK
- Oluwaseyi Lawal
- Department of Community Health Sciences, University of Calgary, Calgary, Canada
- Anita Brobbey
- Department of Community Health Sciences, University of Calgary, Calgary, Canada
- Lisa M Lix
- Department of Community Health Sciences, University of Manitoba, Winnipeg, Canada
- Amelie Anota
- Methodology and Quality of Life Unit in Oncology, University Hospital of Besançon, Besançon, France
- Véronique Sebille
- INSERM, MethodS in Patient-Centered Outcomes and HEalth ResEarch, SPHERE, Nantes Université, Université de Tours, CHU Nantes, 44000, Nantes, France
- Mirjam A G Sprangers
- Medical Psychology, Amsterdam UMC Location University of Amsterdam, Amsterdam, The Netherlands
- Mental Health, Amsterdam Public Health, Amsterdam, The Netherlands
- Mathilde G E Verdam
- Medical Psychology, Amsterdam UMC Location University of Amsterdam, Amsterdam, The Netherlands
- Mental Health, Amsterdam Public Health, Amsterdam, The Netherlands
- Department of Methodology and Statistics, Institute of Psychology, Leiden University, Leiden, The Netherlands
3
Blackmore AM, Mulhern B, Norman R, Reddihough D, Choong CS, Jacoby P, Downs J. How Well Does the EQ-5D-Y-5L Describe Children With Intellectual Disability?: "There's a Lot More to My Child Than That She Can't Wash or Dress Herself." Value Health 2024;27:190-198. PMID: 38043713; DOI: 10.1016/j.jval.2023.11.005.
Abstract
OBJECTIVES The EQ-5D-5L is a generic health utility instrument for measuring health-related quality of life (HRQoL), with self-report and proxy-report versions for children (EQ-5D-Y-5L). Children with intellectual disability (ID) are a heterogeneous population whose impairments and comorbidities place them at risk of poor HRQoL. This study aimed to describe the content validity and suitability of a proxy-report version of the EQ-5D-Y-5L for children with ID, as perceived by their caregivers. METHODS A proxy-report EQ-5D-Y-5L was administered to caregivers of children with ID. Using cognitive think-aloud interviewing, participants were encouraged to explain the reasoning behind their choices, assess the questions' relevance, comprehensibility, and comprehensiveness, and comment on the tool's strengths and weaknesses. Qualitative content analysis used both directed (deductive) and conventional (inductive) methods. RESULTS There were 28 interviews with 30 caregivers of children with ID (aged 8-22 years, 17 boys, with autism spectrum disorder, cerebral palsy, Down syndrome, and rare genetic disorders). The EQ-5D-Y-5L was considered clear, concise, and largely relevant, but insufficiently comprehensive for this population. Interviewees sought clarification of the definition of HRQoL, whether it included unchanging impairments (vs fluctuating health states), and what basis of comparison to use (the child or a peer). Many interviewees suggested including questions for other domains, such as communication and social engagement, equipment and human supports required, and a wider range of mental health questions. CONCLUSIONS The study suggests that further work is required to ensure accurate responses to the EQ-5D-Y-5L from caregivers of children with ID and to describe these children adequately.
Affiliation(s)
- Brendan Mulhern
- Centre for Health Economics Research and Evaluation, University of Technology Sydney, Ultimo, NSW, Australia
- Richard Norman
- Population Health, Curtin University, Bentley, WA, Australia
- Dinah Reddihough
- Murdoch Children's Research Institute, Parkville, VIC, Australia
- Peter Jacoby
- Child Disability, Telethon Kids Institute, Nedlands, WA, Australia
- Jenny Downs
- Child Disability, Telethon Kids Institute, Nedlands, WA, Australia
4
Sprangers MAG, Sawatzky R, Vanier A, Böhnke JR, Sajobi T, Mayo NE, Lix LM, Verdam MGE, Oort FJ, Sébille V. Implications of the syntheses on definition, theory, and methods conducted by the Response Shift - in Sync Working Group. Qual Life Res 2023. PMID: 36757572; PMCID: PMC10329073; DOI: 10.1007/s11136-023-03347-8.
Abstract
PURPOSE Our aim is to advance response shift research by explicating, in an integrative way, the implications of published syntheses by the Response Shift - in Sync Working Group, and by suggesting ways to improve the quality of future response shift studies. METHODS Members of the Working Group further discussed the syntheses of the literature on definitions, theoretical underpinnings, operationalizations, and response shift methods. They outlined areas in need of further explication and refinement, and delineated additional implications for future research. RESULTS First, the proposed response shift definition was further specified, and its implications for the interpretation of results were explicated in relation to former, published definitions. Second, the proposed theoretical model was further explained in relation to previous theoretical models, and its implications for formulating research objectives were highlighted. Third, ways to explore alternative explanations per response shift method, and their implications for response shift detection and explanation, were delineated. The implications of the diversity of response shift methods for response shift research were also presented. Fourth, the implications for future research of the need to enhance the quality and reporting of response shift studies were sketched. CONCLUSION With our work, we intend to contribute to a common language regarding response shift definitions, theory, and methods. By elucidating some of the major implications of earlier work, we hope to advance response shift research.
Affiliation(s)
- Mirjam A G Sprangers
- Department of Medical Psychology, Amsterdam UMC, Location University of Amsterdam, Meibergdreef 15, J3-211, 1105 AZ, Amsterdam, The Netherlands.
- Amsterdam Public Health, Mental Health, Amsterdam, The Netherlands.
- Richard Sawatzky
- School of Nursing, Trinity Western University, Langley, BC, Canada
- Centre for Health Evaluation and Outcome Sciences, University of British Columbia, Vancouver, BC, Canada
- Antoine Vanier
- INSERM, methodS in Patient-centered outcomes and HEalth ResEarch, SPHERE, Nantes Université, Université de Tours, CHU Nantes, F-44000, Nantes, France
- Pharmaceutical Drugs Assessment Department, Assessment and Access to Innovation Direction, Haute Autorité de Santé, Saint-Denis, France
- Jan R Böhnke
- School of Health Sciences, University of Dundee, Dundee, UK
- Tolulope Sajobi
- Department of Community Health Sciences, University of Calgary, Calgary, AB, Canada
- Nancy E Mayo
- Center for Outcomes Research and Evaluation, McGill University, Montreal, QC, Canada
- Division of Clinical Epidemiology, Department of Medicine, McGill University Health Centre Research Institute, Montreal, QC, Canada
- Lisa M Lix
- Department of Community Health Sciences, University of Manitoba, Winnipeg, MB, Canada
- Mathilde G E Verdam
- Department of Medical Psychology, Amsterdam UMC, Location University of Amsterdam, Meibergdreef 15, J3-211, 1105 AZ, Amsterdam, The Netherlands
- Department of Methodology and Statistics, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Frans J Oort
- Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, The Netherlands
- Véronique Sébille
- INSERM, methodS in Patient-centered outcomes and HEalth ResEarch, SPHERE, Nantes Université, Université de Tours, CHU Nantes, F-44000, Nantes, France
5
Vanier A, Leroy M, Hardouin JB. Toward a rigorous assessment of the statistical performances of methods to estimate the Minimal Important Difference of Patient-Reported Outcomes: a protocol for a large-scale simulation study. Methods 2022;204:396-409. PMID: 35202798; DOI: 10.1016/j.ymeth.2022.02.006.
Abstract
Interpreting observed changes over time in Patient-Reported Outcome (PRO) measures is still considered a challenge. Indeed, concluding that an observed change at the group level is statistically significant does not necessarily mean that this change is meaningful from the perspective of the patient. To help interpret within- and/or between-group changes in the measure over time, it is useful to estimate the Minimal Important Difference (MID) of the instrument, the smallest change that patients perceive as important. In the last 30 years, a plethora of methods and estimators have been proposed to derive this MID value using clinical data from samples of patients. MIDs for hundreds of PROs have been estimated, frequently with substantial variability in the results depending on the method used. Nonetheless, a rigorous assessment of the statistical performances of the numerous proposed methods for estimating MIDs by experimental design, such as a Monte Carlo study, has never been performed. The purpose of this paper is to thoroughly describe a protocol for a large-scale simulation study designed to investigate the statistical performances, especially bias relative to a true population value, of the commonly proposed MID estimators. The paper describes how the investigated methods and estimators were retained after a systematic review; the design of a conceptual model that formally defines the true population MID value; and the translation of that conceptual model into a simulation model generating responses to the items of a hypothetical PRO at two measurement times, along with the response to a Patient Global Rating of Change at the second time, under the constraint of a known true MID value. A statistical analysis plan is presented to determine whether working hypotheses about which MID estimators are appropriate will be verified. Strengths, assumptions, and limits of the simulation model are discussed. Finally, we show how this protocol could serve as a basis for fostering future methodological research on interpreting changes in PRO measures.
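One family of estimators evaluated in simulation studies of this kind is anchor-based: the MID is taken as the mean PRO change among patients who report minimal change on a global rating-of-change anchor. A simplified, illustrative sketch (the function name and the anchor category label are hypothetical, not drawn from the protocol):

```python
import numpy as np

def anchor_based_mid(change_scores, anchor_ratings, minimal_category="slightly improved"):
    """Mean PRO change score among patients whose Global Rating of
    Change falls in the minimal-change anchor category."""
    change = np.asarray(change_scores, dtype=float)
    anchor = np.asarray(anchor_ratings)
    in_category = anchor == minimal_category
    if not in_category.any():
        raise ValueError("no patients in the minimal-change anchor category")
    return change[in_category].mean()
```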
Affiliation(s)
- Antoine Vanier
- Inserm - University of Nantes - University of Tours, UMR U1246 Sphere "Methods in Patient-centered Outcomes and Health Research", Nantes 44200, France; Haute Autorité de Santé, Assessment and Access to Innovation Direction, Pharmaceutical Drugs Assessment Department, Saint-Denis 93210, France.
- Maxime Leroy
- University Hospital of Nantes, Unit of Methodology and Biostatistics, Nantes 44000, France
- Jean-Benoit Hardouin
- Inserm - University of Nantes - University of Tours, UMR U1246 Sphere "Methods in Patient-centered Outcomes and Health Research", Nantes 44200, France; University Hospital of Nantes, Unit of Methodology and Biostatistics, Nantes 44000, France
6
Affiliation(s)
- Richard L Skolasky
- Department of Orthopaedic Surgery and Physical Medicine & Rehabilitation, The Johns Hopkins University, 601 N. Caroline Street, JHOC 5223, Baltimore, MD, 21287, USA.
7
Affiliation(s)
- Bruce D Rapkin
- Division of Community Collaboration & Implementation Science, Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY, USA
- Carolyn E Schwartz
- DeltaQuest Foundation, Inc., 31 Mitchell Road, Concord, MA, 01742, USA.
- Departments of Medicine and Orthopaedic Surgery, Tufts University Medical School, Boston, MA, USA.
8
Vanier A, Oort FJ, McClimans L, Ow N, Gulek BG, Böhnke JR, Sprangers M, Sébille V, Mayo N. Response shift in patient-reported outcomes: definition, theory, and a revised model. Qual Life Res 2021;30:3309-3322. PMID: 33909187; PMCID: PMC8602159; DOI: 10.1007/s11136-021-02846-w.
Abstract
Purpose The extant response shift definitions and theoretical response shift models, while helpful, also introduce predicaments, and theoretical debates continue. To address these predicaments and stimulate empirical research, we propose a more specific formal definition of response shift and a revised theoretical model. Methods This work is an international collaborative effort and involved a critical assessment of the literature. Results Three main predicaments were identified. First, the formal definitions of response shift need further specification and clarification. Second, previous models focused on explaining change in the construct intended to be measured rather than explaining the construct at multiple time points, and neglected the importance of using at least two time points to investigate response shift. Third, extant models do not explicitly distinguish the measure from the construct. Here we define response shift as an effect occurring whenever observed change (e.g., change in patient-reported outcome measure (PROM) scores) is not fully explained by target change (i.e., change in the construct intended to be measured). The revised model distinguishes the measure (e.g., a PROM) from the underlying target construct (e.g., quality of life) at two time points. The major plausible paths are delineated, and the underlying assumptions of the model are explicated. Conclusion It is our hope that this refined definition and model will be useful in the further development of response shift theory. The model, with its explicit list of assumptions and hypothesized relationships, lends itself to critical, empirical examination. Future studies are needed to empirically test these assumptions and hypothesized relationships.
Affiliation(s)
- Antoine Vanier
- Inserm - University of Nantes - University of Tours, UMR 1246 Sphere "Methods in Patient-Centered Outcomes and Health Research", Nantes, France.
- University Hospital of Tours - Inserm, CIC 1415, Unit of Methodology Biostatistics and Data-Management, Tours, France.
- Inserm U1246 Sphere, Institut de Recherche en Santé 2 - Université de Nantes, 22, Boulevard Bénoni-Goullin, 44200, Nantes, France.
- Frans J Oort
- University of Amsterdam, Research Institute of Child Development and Education, Amsterdam, The Netherlands
- Leah McClimans
- Department of Philosophy, University of South Carolina, Columbia, SC, USA
- Nikki Ow
- Center for Outcomes Research and Evaluation, McGill University, Montreal, Canada
- Bernice G Gulek
- Harborview Medical Center, University of Washington, Seattle, WA, USA
- College of Nursing, Washington State University, Spokane, WA, USA
- Jan R Böhnke
- School of Health Sciences, University of Dundee, Dundee, UK
- Mirjam Sprangers
- Department of Medical Psychology, Location AMC, Research Institute Amsterdam Public Health, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Véronique Sébille
- Inserm - University of Nantes - University of Tours, UMR 1246 Sphere "Methods in Patient-Centered Outcomes and Health Research", Nantes, France
- Unit of Methodology in Clinical Research and Biostatistics, University Hospital of Nantes, Nantes, France
- Nancy Mayo
- Center for Outcomes Research and Evaluation, McGill University, Montreal, Canada
- Division of Clinical Epidemiology, Department of Medicine, McGill University Health Centre Research Institute, Montreal, Canada
9
Sawatzky R, Kwon JY, Barclay R, Chauhan C, Frank L, van den Hout WB, Nielsen LK, Nolte S, Sprangers MAG; Response Shift – in Sync Working Group. Implications of response shift for micro-, meso-, and macro-level healthcare decision-making using results of patient-reported outcome measures. Qual Life Res 2021. PMID: 33651278; DOI: 10.1007/s11136-021-02766-9.
Abstract
PURPOSE Results of patient-reported outcome measures (PROMs) are increasingly used to inform healthcare decision-making. Research has shown that response shift can impact PROM results. As part of an international collaboration, our goal is to provide a framework regarding the implications of response shift at the level of patient care (micro), healthcare institute (meso), and healthcare policy (macro). METHODS Empirical evidence of response shift that can influence patients' self-reported health and preferences provided the foundation for development of the framework. Measurement validity theory, hermeneutic philosophy, and micro-, meso-, and macro-level healthcare decision-making informed our theoretical analysis. RESULTS At the micro-level, patients' self-reported health needs to be interpreted via dialogue with the clinician to avoid misinterpretation of PROM data due to response shift. It is also important to consider the potential impact of response shift on study results, when these are used to support decisions. At the meso-level, individual-level data should be examined for response shift before aggregating PROM data for decision-making related to quality improvement, performance monitoring, and accreditation. At the macro-level, critical reflection on the conceptualization of health is required to know whether response shift needs to be controlled for when PROM data are used to inform healthcare coverage. CONCLUSION Given empirical evidence of response shift, there is a critical need for guidelines and knowledge translation to avoid potential misinterpretations of PROM results and consequential biases in decision-making. Our framework with guiding questions provides a structure for developing strategies to address potential impacts of response shift at micro-, meso-, and macro-levels.
10
Sébille V, Lix LM, Ayilara OF, Sajobi TT, Janssens ACJW, Sawatzky R, Sprangers MAG, Verdam MGE; Response Shift – in Sync Working Group. Critical examination of current response shift methods and proposal for advancing new methods. Qual Life Res 2021. PMID: 33595827; DOI: 10.1007/s11136-020-02755-4.
Abstract
Purpose This work is part of an international, interdisciplinary initiative to synthesize research on response shift in results of patient-reported outcome measures. The objective is to critically examine current response shift methods. We additionally propose advancing new methods that address the limitations of extant methods. Methods Based on literature reviews, this critical examination covers design-based, qualitative, individualized, and preference-based methods, latent variable models, and other statistical methods. We critically appraised their definition, operationalization, the type of response shift they can detect, whether they can adjust for and explain response shift, their assumptions, and alternative explanations. Overall limitations requiring new methods were identified. Results We examined 11 methods that aim to operationalize response shift by assessing change in the meaning of one's self-evaluation. Six of these methods distinguish between change in observed measurements (observed change) and change in the construct that was intended to be measured (target change). The methods use either (sub)group-based or individual-level analysis, or a combination. All methods have underlying assumptions to be met and alternative explanations for the inferred response shift effects. We highlighted the need to address the interpretation of the results as response shift and proposed advancing new methods that handle individual variation in change over time and multiple time points. Conclusion No single response shift method is optimal; each method has strengths and limitations. Additionally, extra steps need to be taken to correctly interpret the results. Advancing new methods and conducting computer simulation studies that compare methods are recommended to move response shift research forward.