1. Loskutova NY, Callen E, Pinckney RG, Staton EW, Pace WD. Feasibility, Implementation and Outcomes of Tablet-Based Two-Step Screening for Adult ADHD in Primary Care Practice. J Atten Disord 2021; 25:794-802. [PMID: 31014157] [DOI: 10.1177/1087054719841133]
Abstract
Background: Primary care clinicians need to recognize and diagnose adult ADHD (AADHD). We tested the feasibility and outcomes of a two-step screening process for AADHD in primary care. Methods: Seven practices screened patients using computerized surveys. Patients screening positive completed the Adult ADHD Quality of Life scale (AAQoL). We explored the impact of screening on workflow and its acceptability to patients, and identified key barriers and opportunities for continuing screening. Results: Of the 711 participating adults, 188 (26.4%) screened positive, of whom 32 (17.0%) scored at least one standard deviation below the mean on two or more AAQoL domains (average 23.6 ± 7.3). These 32 individuals represented 4.5% of all participants. Clinicians were willing to screen for, diagnose, and treat AADHD, but needed additional resources. The screening process and technology were acceptable to patients and staff. Conclusions: A two-step screening method shows promise for routine AADHD screening.
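The two-step logic in this abstract reduces to a simple rule: a positive first-step screen is carried forward only when two or more AAQoL domain scores fall at least one standard deviation below the normative mean. A minimal sketch of that rule follows; the domain names and the normative means/SDs are hypothetical placeholders, since the paper's actual screener items and AAQoL norms are not reproduced in the abstract.

```python
# A minimal sketch of the two-step rule, assuming hypothetical AAQoL
# domain names and normative (mean, SD) values; the paper's actual
# screener items and norms are not reproduced in the abstract.

AAQOL_NORMS = {  # hypothetical normative values per domain
    "life_productivity": (70.0, 15.0),
    "psychological_health": (72.0, 14.0),
    "life_outlook": (75.0, 13.0),
    "relationships": (74.0, 16.0),
}

def step_two_flag(domain_scores: dict[str, float], min_low_domains: int = 2) -> bool:
    """Carry a positive first-step screen forward when two or more AAQoL
    domain scores fall at least one SD below the normative mean."""
    low = sum(
        1
        for domain, score in domain_scores.items()
        if score <= AAQOL_NORMS[domain][0] - AAQOL_NORMS[domain][1]
    )
    return low >= min_low_domains

# Example: a patient who screened positive in step one.
patient = {
    "life_productivity": 50.0,    # more than 1 SD below 70
    "psychological_health": 60.0,
    "life_outlook": 55.0,         # more than 1 SD below 75
    "relationships": 70.0,
}
print(step_two_flag(patient))  # True -> refer for diagnostic evaluation
```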

2. Dannenberg MD, Bienvenida JCM, Bruce ML, Nguyen T, Hinn M, Matthews J, Bartels SJ, Elwyn G, Barr PJ. End-user views of an electronic encounter decision aid linked to routine depression screening. Patient Educ Couns 2019; 102:555-563. [PMID: 30497800] [DOI: 10.1016/j.pec.2018.10.002]
Abstract
OBJECTIVE Our aim was to gather community stakeholder input to inform the development of a digital system linking depression screening to decision support. METHODS Views and feature requirements were identified through (1) focus groups with patients and consumers with depression, and interviews with primary care clinicians, and (2) usability sessions in which patients and consumers used the current version of an encounter decision aid (eDA) in a primary care waiting room. Qualitative data were analyzed using the framework method. RESULTS We conducted six focus groups with 15 participants, seven clinician interviews, and 10 usability sessions. Patients were comfortable completing the Patient Health Questionnaire (PHQ-9) and receiving the electronic eDA in clinic, feeling that this would allow them to prepare for their visit and instill a sense of agency. Participants were comfortable receiving the PHQ-9 results and a subsequent eDA on a tablet in the waiting room. CONCLUSION Patients with and without depression, as well as clinicians, viewed linking the PHQ-9, its results, and the eDA positively. Patients were comfortable doing this in the clinic waiting room. PRACTICE IMPLICATIONS Linking depression decision support to screening was viewed positively by patients and clinicians and could help overcome barriers to implementing shared decision-making in this population.
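The screening-to-decision-support link described here is, at its core, a score-and-trigger step. The sketch below illustrates it under stated assumptions: the cut-off of 10 (a commonly used PHQ-9 threshold for moderate depression) and the function names are illustrative, not the study's actual trigger logic.

```python
# A minimal sketch of the screening-to-decision-aid link, assuming a
# PHQ-9 total of 10 (a commonly used cut-off for moderate depression)
# triggers the eDA; the study's actual trigger logic is not described
# in the abstract.

PHQ9_TRIGGER_THRESHOLD = 10  # assumed cut-off for surfacing the eDA

def phq9_total(item_scores: list[int]) -> int:
    """Sum the nine PHQ-9 items; each item is scored 0-3."""
    if len(item_scores) != 9 or any(not 0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 requires nine items scored 0-3")
    return sum(item_scores)

def should_offer_eda(item_scores: list[int]) -> bool:
    """Surface the encounter decision aid when the total meets the threshold."""
    return phq9_total(item_scores) >= PHQ9_TRIGGER_THRESHOLD

# Example: a waiting-room tablet response totalling 11.
print(should_offer_eda([2, 1, 2, 1, 1, 2, 1, 1, 0]))  # True -> eDA shown
```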
Affiliation(s)
- Michelle D Dannenberg
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
- John Carlo M Bienvenida
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
- Martha L Bruce
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA; Departments of Psychiatry and Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Stephen J Bartels
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA; Departments of Psychiatry and Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Glyn Elwyn
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
- Paul J Barr
- The Dartmouth Institute for Health Policy & Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA

3. Stoicea N, Koehler K, Scharre D, Bergese S. Cognitive self-assessment scales in surgical settings: Acceptability and feasibility. Best Pract Res Clin Anaesthesiol 2018; 32:303-309. [DOI: 10.1016/j.bpa.2018.08.001]

4. Brodey BB, Gonzalez NL, Elkin KA, Sasiela WJ, Brodey IS. Assessing the Equivalence of Paper, Mobile Phone, and Tablet Survey Responses at a Community Mental Health Center Using Equivalent Halves of a 'Gold-Standard' Depression Item Bank. JMIR Ment Health 2017; 4:e36. [PMID: 28877861] [PMCID: PMC5607438] [DOI: 10.2196/mental.6805]
Abstract
BACKGROUND The computerized administration of self-report psychiatric diagnostic and outcomes assessments has risen in popularity. If results are similar enough across different administration modalities, then new administration technologies can be used interchangeably and the choice of technology can be based on other factors, such as convenience in the study design. An assessment based on item response theory (IRT), such as the Patient-Reported Outcomes Measurement Information System (PROMIS) depression item bank, offers new possibilities for assessing the effect of technology choice on results. OBJECTIVE To create equivalent halves of the PROMIS depression item bank and to use these halves to compare survey responses and user satisfaction among administration modalities (paper, mobile phone, or tablet) in a community mental health care population. METHODS The 28 PROMIS depression items were divided into two halves based on content and on simulations with an established PROMIS response data set. A total of 129 participants were recruited from an outpatient public-sector mental health clinic in Memphis. All participants took both nonoverlapping halves of the PROMIS IRT-based depression items (Part A and Part B): once using paper and pencil, and once using either a mobile phone or tablet. An 8-cell randomization was done on the technology used, the order of technologies, and the order of PROMIS Parts A and B. Both parts were administered as fixed-length assessments and scored using published PROMIS IRT parameters and algorithms. RESULTS All 129 participants completed either Part A or Part B on paper and the other part electronically, 63 using a mobile phone and 66 using a tablet. There was no significant difference in item response scores for Part A versus Part B. All three technologies yielded essentially identical assessment results and equivalent satisfaction levels. CONCLUSIONS Our findings show that the PROMIS depression assessment can be divided into two equivalent halves, with the potential to simplify future experimental methodologies. Among community mental health care recipients, the PROMIS items function similarly whether administered via paper, tablet, or mobile phone, and user satisfaction across modalities was similar. Because paper, tablet, and mobile phone administrations yielded similar results, the choice of technology can be based on factors such as convenience and can even be changed during a study without adversely affecting the comparability of results.
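The 8-cell randomization in the methods is a 2 × 2 × 2 factorial allocation: which device is used, whether paper or the device comes first, and whether Part A or Part B comes first. A minimal sketch of that allocation follows; the cell labels and simple random assignment are assumptions, and the authors may have used blocked allocation to balance cells.

```python
# A minimal sketch of the 8-cell (2 x 2 x 2) allocation: device used,
# whether paper or the device comes first, and whether Part A or Part B
# comes first. Cell labels and simple random assignment are assumptions;
# the authors may have used blocked allocation to balance cells.
import itertools
import random

DEVICES = ["mobile_phone", "tablet"]
MODE_ORDERS = ["paper_first", "electronic_first"]
PART_ORDERS = ["A_first", "B_first"]

# The eight cells of the factorial design.
CELLS = list(itertools.product(DEVICES, MODE_ORDERS, PART_ORDERS))

rng = random.Random(42)  # seeded for reproducibility
for participant_id in range(5):
    device, mode_order, part_order = rng.choice(CELLS)
    print(participant_id, device, mode_order, part_order)
```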

5. Sharma BB, Singh V. Assessment of the mind in chronic obstructive pulmonary disease: Mind or never mind. Lung India 2016; 33:125-8. [PMID: 27051096] [PMCID: PMC4797427] [DOI: 10.4103/0970-2113.177462]
Affiliation(s)
- Bharat Bhushan Sharma
- Department of Medicine, Division of Allergy and Pulmonary Medicine, SMS Medical College Hospital, Jaipur, Rajasthan, India
- Virendra Singh
- Department of Respiratory Medicine, Asthma Bhawan, Jaipur, Rajasthan, India

6. Muehlhausen W, Doll H, Quadri N, Fordham B, O'Donohoe P, Dogar N, Wild DJ. Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013. Health Qual Life Outcomes 2015; 13:167. [PMID: 26446159] [PMCID: PMC4597451] [DOI: 10.1186/s12955-015-0362-x]
Abstract
OBJECTIVE To conduct a systematic review and meta-analysis of the equivalence between electronic and paper administration of patient-reported outcome measures (PROMs) in studies conducted subsequent to those included in Gwaltney et al.'s 2008 review. METHODS A systematic literature review of PROM equivalence studies conducted between 2007 and 2013 identified 1,997 records, from which 72 studies met pre-defined inclusion/exclusion criteria. PRO data from each study were extracted, in terms of both correlation coefficients (ICCs, Spearman and Pearson correlations, Kappa statistics) and mean differences (standardized by the standard deviation, SD, and the response scale range). Pooled estimates of correlation and mean difference were estimated. The modifying effects of mode of administration, year of publication, study design, time interval between administrations, mean age of participants and publication type were examined. RESULTS Four hundred thirty-five individual correlations were extracted; these correlations were highly variable (I² = 93.8) but showed generally good equivalence, with ICCs ranging from 0.65 to 0.99 and a pooled correlation coefficient of 0.88 (95% CI 0.87 to 0.88). Standardised mean differences for 307 studies were small and less variable (I² = 33.5), with a pooled standardised mean difference of 0.037 (95% CI 0.031 to 0.042). Average administration mode/platform-specific correlations from 56 studies (61 estimates) had a pooled estimate of 0.88 (95% CI 0.86 to 0.90) and were still highly variable (I² = 92.1). Similarly, average platform-specific ICCs from 39 studies (42 estimates) had a pooled estimate of 0.90 (95% CI 0.88 to 0.92) with an I² of 91.5. After excluding 20 studies with outlying correlation coefficients (≥3 SD from the mean), the I² was 54.4, with equivalence still high, the overall pooled correlation coefficient being 0.88 (95% CI 0.87 to 0.88). Agreement was greater in more recent studies (p < 0.001), in randomized studies compared with non-randomised studies (p < 0.001), in studies with a shorter interval (<1 day) between administrations (p < 0.001), and in respondents of mean age 28 to 55 compared with those either younger or older (p < 0.001). In terms of mode/platform, paper vs Interactive Voice Response System (IVRS) comparisons had the lowest pooled agreement and paper vs tablet/touch screen the highest (p < 0.001). CONCLUSION The present study supports the conclusion of Gwaltney's previous meta-analysis that PROMs administered on paper are quantitatively comparable with measures administered on an electronic device. It also confirms the ISPOR Taskforce's conclusion that quantitative equivalence studies are not required for migrations with minor changes only. This finding should be reassuring to investigators, regulators and sponsors using questionnaires on electronic devices after migration using best practices. Although there are data indicating that migrations with moderate changes produce equivalent instrument versions, and hence do not require quantitative equivalence studies, additional work is necessary to establish this. Furthermore, there is a need to standardize migration and reporting practices (i.e., include copies of tested instrument versions and screenshots) so that clear recommendations regarding equivalence testing can be made in the future.
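The pooled correlation of 0.88 and the I² heterogeneity statistics reported above follow the standard recipe for meta-analyzing correlations: Fisher z-transform each r, weight by inverse variance, compute Cochran's Q and I², and back-transform the pooled z. A minimal sketch, assuming a simple fixed-effect inverse-variance model (the review's exact estimator may differ):

```python
# A minimal sketch of pooling correlations, assuming a fixed-effect
# inverse-variance model on Fisher z-transformed values; the review's
# exact estimator may differ.
import math

def pool_correlations(rs: list[float], ns: list[int]) -> tuple[float, float]:
    """Return (pooled r, I-squared in percent) from per-study
    correlations and sample sizes."""
    zs = [math.atanh(r) for r in rs]          # Fisher z-transform
    ws = [n - 3 for n in ns]                  # weight = 1 / var(z) = n - 3
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))  # Cochran's Q
    df = len(rs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.tanh(z_bar), i2               # back-transform pooled z

# Toy example with three hypothetical equivalence studies.
r_pooled, i2 = pool_correlations([0.85, 0.90, 0.93], [120, 80, 200])
print(f"pooled r = {r_pooled:.2f}, I2 = {i2:.1f}%")
```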
Affiliation(s)
- Willie Muehlhausen
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK
- Helen Doll
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK
- Nuz Quadri
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK
- Bethany Fordham
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK
- Paul O'Donohoe
- CRF Health, Brook House - 3rd Floor, 229-243 Shepherds Bush Road, Hammersmith, London, W6 7AN, UK
- Nijda Dogar
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK
- Diane J Wild
- ICON Clinical Research, 6th Floor Seacourt Tower, West Way, Oxford, OX2 0JJ, UK

7. Marcano Belisario JS, Jamsek J, Huckvale K, O'Donoghue J, Morrison CP, Car J. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev 2015; 2015:MR000042. [PMID: 26212714] [PMCID: PMC8152947] [DOI: 10.1002/14651858.mr000042.pub2]
Abstract
BACKGROUND Self-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving wide geographic coverage of the target population and for dealing with sensitive topics, and they are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected. OBJECTIVES To assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects. SEARCH METHODS We searched MEDLINE, EMBASE, PsycINFO, IEEEXplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials, such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform, as well as the grey literature in OpenGrey, Mobile Active and ProQuest Dissertation & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches on 12 and 13 April 2015. SELECTION CRITERIA We included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaires, both validated and non-validated, and data offered both by healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondents' adherence to the original sampling protocol; and acceptability of the delivery mode to respondents. We included studies published in 2007 or after, as devices that became available from this time are compatible with the mobile operating system (OS) framework that focuses on apps. DATA COLLECTION AND ANALYSIS Two review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap, then compared their forms to reach consensus. Through an initial systematic mapping of the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents).
We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate given the high levels of clinical and methodological diversity. We reported our findings for each outcome according to the setting in which the studies were conducted. MAIN RESULTS We included 14 studies (15 records) with a total of 2275 participants, although only 2272 participants were included in the final analyses because of missing data for three participants from one included study. Regarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in mean overall scores between apps and other delivery modes, and all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas another found no significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were reported only in uncontrolled settings: an app was found to result in more complete records than paper and in significantly more data entries than an SMS-based survey questionnaire, and apps may be better than paper for adherence to the sampling protocol but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system; informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates. AUTHORS' CONCLUSIONS Our results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices, and those conducting methodological research might wish to explore the issues highlighted by this systematic review.
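The "recommended thresholds for data equivalence" mentioned above are typically judged with an agreement statistic between paired administrations. As one concrete illustration (not the review's own analysis code, and on hypothetical data), here is a two-way random-effects, absolute-agreement ICC(2,1) computed from paired app and paper scores:

```python
# A minimal sketch of one common "data equivalence" statistic: a two-way
# random-effects, absolute-agreement ICC(2,1) between paired app and
# paper scores. Illustrative only; the included studies used a variety
# of coefficients and thresholds.

def icc_2_1(paper: list[float], app: list[float]) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single scores."""
    n, k = len(paper), 2
    rows = list(zip(paper, app))
    grand = sum(p + a for p, a in rows) / (n * k)
    row_means = [(p + a) / k for p, a in rows]
    col_means = [sum(paper) / n, sum(app) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in rows for x in row)
    ms_r = ss_rows / (n - 1)                                      # between-subjects
    ms_c = ss_cols / (k - 1)                                      # between-modes
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))   # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical paired total scores from the same respondents.
paper_scores = [10.0, 14.0, 18.0, 9.0, 22.0, 16.0]
app_scores = [11.0, 13.0, 19.0, 9.0, 21.0, 17.0]
print(round(icc_2_1(paper_scores, app_scores), 3))  # near 1 -> good agreement
```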
Affiliation(s)
- José S Marcano Belisario
- Global eHealth Unit, Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK
- Jan Jamsek
- Faculty of Medicine, University of Ljubljana, Vrazov trg 2, 1000 Ljubljana, Slovenia
- Kit Huckvale
- Global eHealth Unit, Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK
- John O'Donoghue
- Department of Primary Care and Public Health, School of Public Health, Imperial College London, Room 326, The Reynolds Building, St Dunstans Road, London W6 8RP, UK
- Cecily P Morrison
- Global eHealth Unit, Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK
- Josip Car
- Lee Kong Chian School of Medicine, Imperial College & Nanyang Technological University, 3 Fusionopolis Link, #03-08, Nexus@one-north, Singapore 138543, Singapore

8. Alfonsson S, Maathz P, Hursti T. Interformat reliability of digital psychiatric self-report questionnaires: a systematic review. J Med Internet Res 2014; 16:e268. [PMID: 25472463] [PMCID: PMC4275488] [DOI: 10.2196/jmir.3395]
Abstract
Background Research on Internet-based interventions typically uses digital versions of pen-and-paper self-report symptom scales. However, adaptation into the digital format could affect the psychometric properties of established self-report scales. Several studies have investigated differences between digital and pen-and-paper versions of instruments, but no systematic review of the results had yet been done. Objective This review aims to assess the interformat reliability of self-report symptom scales used in digital or online psychotherapy research. Methods Three databases (MEDLINE, Embase, and PsycINFO) were systematically reviewed for studies investigating the reliability between digital and pen-and-paper versions of psychiatric symptom scales. Results From a total of 1504 publications, 33 were included in the review, and the interformat reliability of 40 different symptom scales was assessed. Significant differences in mean total scores between formats were found in 10 of 62 analyses. These differences were confined to a few studies, which indicates that they were due to study and sample effects rather than unreliable instruments. The interformat reliability ranged from r=.35 to r=.99, but the majority of instruments showed a strong correlation between format scores. The quality of the included studies varied, and several had insufficient statistical power to detect small differences between formats. Conclusions When digital versions of self-report symptom scales are compared with pen-and-paper versions, most scales show high interformat reliability. This supports the reliability of results obtained in psychotherapy research on the Internet and their comparability to traditional psychotherapy research. There are, however, some instruments that consistently show low interformat reliability, so these conclusions cannot be generalized to all questionnaires. Most studies had at least some methodological issues, insufficient statistical power being the most common. Future studies should preferably provide more detailed information about the transformation of the instrument into digital format and the procedure for data collection.
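Interformat reliability as described here rests on two per-instrument checks: the correlation between paper and digital total scores, and a test for a systematic difference in mean totals. A minimal sketch of both checks, on hypothetical data (note that statistics.correlation requires Python 3.10+):

```python
# A minimal sketch of the two per-instrument checks described above:
# the interformat correlation between paper and digital total scores,
# and a paired t statistic for a systematic mean difference. Data are
# hypothetical; statistics.correlation requires Python 3.10+.
import math
import statistics

def interformat_checks(paper: list[float], digital: list[float]) -> tuple[float, float]:
    """Return (Pearson r, paired t statistic) for matched administrations."""
    r = statistics.correlation(paper, digital)
    diffs = [d - p for p, d in zip(paper, digital)]
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
    return r, t

paper_totals = [12.0, 8.0, 21.0, 15.0, 9.0, 18.0, 11.0, 25.0]
digital_totals = [13.0, 8.0, 20.0, 16.0, 10.0, 18.0, 12.0, 24.0]
r, t = interformat_checks(paper_totals, digital_totals)
print(f"interformat r = {r:.2f}, paired t = {t:.2f}")
```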
Affiliation(s)
- Sven Alfonsson
- U-CARE, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden.

9. Kean J, Malec JF, Cooper DB, Bowles AO. Utility of the Mayo-Portland Adaptability Inventory-4 for Self-Reported Outcomes in a Military Sample With Traumatic Brain Injury. Arch Phys Med Rehabil 2013; 94:2417-2424. [DOI: 10.1016/j.apmr.2013.08.006]

10. Boswell JF, Kraus DR, Miller SD, Lambert MJ. Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychother Res 2013; 25:6-19. [DOI: 10.1080/10503307.2013.817696]

11. Goldstein LA, Connolly Gibbons MB, Thompson SM, Scott K, Heintz L, Green P, Thompson D, Crits-Christoph P. Outcome assessment via handheld computer in community mental health: consumer satisfaction and reliability. J Behav Health Serv Res 2011; 38:414-23. [PMID: 21107916] [DOI: 10.1007/s11414-010-9229-4]
Abstract
Computerized administration of mental health-related questionnaires has become relatively common, but little research has explored this mode of assessment in "real-world" settings. In the current study, 200 consumers at a community mental health center completed the BASIS-24 via handheld computer as well as paper and pen. Scores on the computerized BASIS-24 were compared with scores on the paper BASIS-24. Consumers also completed a questionnaire which assessed their level of satisfaction with the computerized BASIS-24. Results indicated that the BASIS-24 administered via handheld computer was highly correlated with pen and paper administration of the measure and was generally acceptable to consumers. Administration of the BASIS-24 via handheld computer may allow for efficient and sustainable outcomes assessment, adaptable research infrastructure, and maximization of clinical impact in community mental health agencies.

12.
Abstract
Major depressive disorder (MDD) is one of the most prevalent psychiatric disorders affecting children and adolescents. The significant psychiatric, social, and functional impairments associated with this disorder coupled with the high incidence of relapse indicate a need for continued efforts to enhance treatment. Current empirically supported treatments for childhood and adolescent MDD include psychotropic medications, psychotherapy, and a combination of both treatments, with selection of the most appropriate strategy depending on symptom severity. One strategy to enhance treatment outcome is the use of measurement-based care. This article provides a systematic review of measurement-based care in the treatment of childhood and adolescent MDD. It also presents a comprehensive analysis of widely used depression rating scales and discusses their utility in clinical practice. This review found evidence supporting the utility and benefit of depression rating scales to document depression severity in children and adolescents. We also found evidence suggesting that many of these scales are time efficient, and that both clinician-rated and self-rated scales provide accurate assessment of depressive symptomatology. Future research is warranted to examine the utility of measurement-based care in clinical practice with child and adolescent populations.

13. Rose M, Bezjak A. Logistics of collecting patient-reported outcomes (PROs) in clinical practice: an overview and practical examples. Qual Life Res 2009; 18:125-36. [DOI: 10.1007/s11136-008-9436-0]