1. Schneider S, Hernandez R, Junghaenel DU, Jin H, Lee PJ, Gao H, Maupin D, Orriens B, Meijer E, Stone AA. Can you tell people's cognitive ability level from their response patterns in questionnaires? Behav Res Methods 2024. PMID: 38528247. DOI: 10.3758/s13428-024-02388-2.
Abstract
Questionnaires are ever-present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance for each item based on the difference between the observed and model-predicted response ("response error" scores). Analyzing data from 102 items (21 questionnaires) collected from a large-scale nationally representative sample of people aged 50+ years, we found robust associations of cognitive ability with a person's largest but not with their smallest response error scores (supporting the WPR), and stronger associations of cognitive ability with response errors for more complex than for less complex questions (supporting the TCH). Results replicated across two independent samples and six assessment waves. A latent variable of response errors estimated for the most complex items correlated .50 with a latent cognitive ability factor, suggesting that response patterns can be utilized to extract a rough indicator of general cognitive ability in survey research.
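The response-error logic described here lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of how worst- and best-performance indicators could be derived once a graded response model has been fitted; the predicted probabilities are random placeholders standing in for the real IRT output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_cats = 1000, 21, 5

# Placeholder: model-predicted category probabilities from a fitted graded
# response model, shape (persons, items, categories). In a real analysis
# these come from the IRT fit, not from random numbers.
pred = rng.dirichlet(np.ones(n_cats), size=(n_persons, n_items))
observed = rng.integers(0, n_cats, size=(n_persons, n_items))

# Expected (model-predicted) response for each person-item pair
expected = (pred * np.arange(n_cats)).sum(axis=2)

# "Response error": discrepancy between observed and predicted response
resp_err = np.abs(observed - expected)

worst = resp_err.max(axis=1)  # largest error per person (WPR indicator)
best = resp_err.min(axis=1)   # smallest error per person
```

Under the WPR, cognitive ability should correlate with `worst` but only weakly, if at all, with `best`.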
Affiliation(s)
- Stefan Schneider
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA.
- Department of Psychology, University of Southern California, Los Angeles, CA, USA.
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA.
- Raymond Hernandez
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Doerte U Junghaenel
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA
- Haomiao Jin
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Pey-Jiuan Lee
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Hongxin Gao
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Danny Maupin
- School of Health Sciences, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Bart Orriens
- Center for Economic and Social Research, University of Southern California, Los Angeles, CA, USA
- Erik Meijer
- Center for Economic and Social Research, University of Southern California, Los Angeles, CA, USA
- Arthur A Stone
- Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
2. Kwesiga D, Malqvist M, Orach CG, Eriksson L, Blencowe H, Waiswa P. Exploring women's interpretations of survey questions on pregnancy and pregnancy outcomes: cognitive interviews in Iganga Mayuge, Uganda. Reprod Health 2024; 21:14. PMID: 38287426. PMCID: PMC10826263. DOI: 10.1186/s12978-024-01745-w.
Abstract
BACKGROUND In 2021, Uganda's neonatal mortality rate was approximately 19 deaths per 1000 live births, with an estimated stillbirth rate of 15.1 per 1000 total births. Data are critical for indicating areas where deaths occur and why, hence driving improvements. Many countries rely on surveys like the Demographic and Health Surveys (DHS), which face challenges with respondents' misinterpretation of questions. However, little is documented about this in Uganda. Cognitive interviews aim to improve questionnaires and assess participants' comprehension of items. Through cognitive interviews, we explored women's interpretations of questions on pregnancy and pregnancy outcomes. METHODS In November 2021, we conducted cognitive interviews with 20 women in the Iganga Mayuge health and demographic surveillance system site in eastern Uganda. We adapted the reproductive section of the DHS VIII women's questionnaire, purposively selected questions, and used concurrent verbal probing. Participants had secondary school education and were English speaking. Cognition was measured by comparing participants' responses against the instructions in the DHS interviewers' manual and the researcher's knowledge. A qualitative descriptive approach to analysis was undertaken. RESULTS We report findings under the cognitive aspect of comprehension. Some questions were correctly understood, especially those with fewer technical terms or without multiple sections. Most participants struggled with the question asking whether the woman's living biological children reside with her; some thought it referred to how many living children she had. There were comprehension difficulties with long questions, such as question 210, which asks about miscarriages, newborn deaths, and stillbirths together. Participants had varying understandings of miscarriage, while many misinterpreted stillbirth, not linking it to gestational age. Thus, even amongst educated women, some survey questions were misunderstood. CONCLUSIONS Population surveys may misclassify, over-report, or under-report events around pregnancy and pregnancy outcomes. Interviewers should begin with a standard definition of key terms and ensure respondents understand these. Questions can be simplified by breaking up long sentences, and interviewer training should be modified to ensure interviewers thoroughly understand key terms. We recommend cognitive interviews during survey tool development, beyond basic pre-testing. Improving respondents' comprehension, and thus response accuracy, will increase reporting and data quality.
Affiliation(s)
- Doris Kwesiga
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden.
- Department of Health Policy, Planning and Management, School of Public Health, Makerere University, Kampala, Uganda.
- Mats Malqvist
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Christopher Garimoi Orach
- Department of Community Health and Behavioral Sciences, School of Public Health, Makerere University, Kampala, Uganda
- Leif Eriksson
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Hannah Blencowe
- Maternal, Adolescent, Reproductive and Child Health Centre (MARCH), London School of Hygiene and Tropical Medicine, London, UK
- Peter Waiswa
- Department of Health Policy, Planning and Management, School of Public Health, Makerere University, Kampala, Uganda
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
- Busoga Health Forum, Jinja, Uganda
3. Mavragani A, Bonner C, Muscat DM, Dunn AG, Harrison E, Dalmazzo J, Mouwad D, Aslani P, Shepherd HL, McCaffery KJ. Multiple Automated Health Literacy Assessments of Written Health Information: Development of the SHeLL (Sydney Health Literacy Lab) Health Literacy Editor v1. JMIR Form Res 2023; 7:e40645. PMID: 36787164. PMCID: PMC9975914. DOI: 10.2196/40645.
Abstract
Producing health information that people can easily understand is challenging and time-consuming. Existing guidance is often subjective and lacks specificity. With advances in software that reads and analyzes text, there is an opportunity to develop tools that provide objective, specific, and automated guidance on the complexity of health information. This paper outlines the development of the SHeLL (Sydney Health Literacy Lab) Health Literacy Editor, an automated tool to facilitate the implementation of health literacy guidelines for the production of easy-to-read written health information. Target users were any person or organization that develops consumer-facing education materials, with or without prior experience with health literacy concepts. Anticipated users included health professionals, staff, and government and nongovernment agencies. To develop this tool, existing health literacy and relevant writing guidelines were collated. Items amenable to programmable automated assessment were incorporated into the Editor. A set of natural language processing methods was also adapted for use in the SHeLL Editor, though the approach was primarily procedural (rule-based). As a result of this process, the Editor comprises 6 assessments: readability (school-grade reading score calculated using the Simple Measure of Gobbledygook, SMOG), complex language (percentage of the text that contains public health thesaurus entries, words that are uncommon in English, or acronyms), passive voice, text structure (eg, use of long paragraphs), lexical density and diversity, and person-centered language. These are presented as global scores, with additional, more specific feedback flagged in the text itself. Feedback is provided in real time so that users can iteratively revise and improve the text. The design also includes a "text preparation" mode, which allows users to quickly make adjustments to ensure accurate calculation of readability. A hierarchy of assessments also helps users prioritize the most important feedback. Lastly, the Editor has a function that exports the analysis and revised text. The SHeLL Health Literacy Editor is a new tool that can help improve the quality and safety of written health information. It provides objective, immediate feedback on a range of factors, complementing readability with other less widely used but important objective assessments such as complex and person-centered language. It can be used as a scalable intervention to support the uptake of health literacy guidelines by health services and providers of health information. This early prototype can be further refined by expanding the thesaurus and leveraging new machine learning methods for assessing the complexity of the written text. User testing with health professionals is needed before evaluating the Editor's ability to improve the health literacy of written health information and evaluating its implementation into existing Australian health services.
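The SMOG grade the Editor reports is a published formula (McLaughlin, 1969), so it can be sketched directly. The syllable counter below is a naive vowel-group heuristic, not the SHeLL Editor's actual implementation, and the sample sentence is invented.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    # SMOG formula (McLaughlin, 1969)
    return 1.0430 * (polysyllables * 30 / len(sentences)) ** 0.5 + 3.1291

print(smog_grade("Take one tablet twice daily. Contact your doctor if symptoms persist."))
```

Strictly, SMOG was normed on 30-sentence samples; the 30/sentences factor is the usual extrapolation for shorter texts.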
Affiliation(s)
- Carissa Bonner
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Danielle M Muscat
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Adam G Dunn
- Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Eliza Harrison
- Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Jason Dalmazzo
- Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Dana Mouwad
- Western Sydney Local Health District, Health Literacy Hub, Sydney, Australia
- Parisa Aslani
- School of Pharmacy, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Heather L Shepherd
- Susan Wakil School of Nursing and Midwifery, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Kirsten J McCaffery
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
4. Development of a measure to assess the quality of proxy decisions about research participation on behalf of adults lacking capacity to consent: the Combined Scale for Proxy Informed Consent Decisions (CONCORD scale). Trials 2022; 23:843. PMID: 36195929. PMCID: PMC9531498. DOI: 10.1186/s13063-022-06787-8.
Abstract
BACKGROUND Recruitment of adults lacking the capacity to consent to trials requires the involvement of an alternative 'proxy' decision-maker, usually a family member. This can be challenging for family members, with some experiencing emotional and decisional burdens. Interventions to support proxy consent decisions in non-emergency settings are being developed. However, the ability to evaluate interventions is limited by a lack of measures that capture outcomes of known importance, as identified through a core outcome set (COS). METHODS Following established measure-development principles, a four-stage process was used to develop and refine items for a new measure of proxy decision quality: (1) review of findings from a recent scoping review and consensus study to identify items for inclusion in the scale and any existing outcome measures, (2) assessment of content coverage by existing measures and identification of insufficiency, (3) construction of a novel scale, and (4) cognitive testing to explore comprehension of the scale and test its content adequacy through interviews with family members of people with impaired capacity. RESULTS A range of outcome measures associated with healthcare decision-making and informed consent decisions, such as the Decisional Conflict Scale, were identified in the scoping review. These measures were mapped against the key constructs identified in the COS to assess content coverage. Insufficient coverage of areas such as proxy-specific satisfaction and knowledge sufficiency by existing instruments indicated that a novel measure was needed. An initial version of a combined measure (the CONCORD scale) was drafted and tested during cognitive interviews with eleven family members. The interviews established the comprehension, acceptability, feasibility, and content adequacy of the scale. Participants suggested re-phrasing and re-ordering some questions, leading to the creation of a revised version. CONCLUSIONS The CONCORD scale provides a brief measure to evaluate the quality of decisions made on behalf of an adult who lacks the capacity to consent in non-emergency settings, enabling the evaluation of interventions to improve proxy decision quality. Initial evaluation indicates it has content adequacy and is feasible to use. Further statistical validation work is being undertaken.
5. Capturing richer information: On establishing the validity of an interval-valued survey response mode. Behav Res Methods 2021; 54:1240-1262. PMID: 34494219. PMCID: PMC9170647. DOI: 10.3758/s13428-021-01635-0.
Abstract
Obtaining quantitative survey responses that are both accurate and informative is crucial to a wide range of fields. Traditional and ubiquitous response formats such as Likert and visual analogue scales require condensation of responses into discrete or point values—but sometimes a range of options may better represent the correct answer. In this paper, we propose an efficient interval-valued response mode, whereby responses are made by marking an ellipse along a continuous scale. We discuss its potential to capture and quantify valuable information that would be lost using conventional approaches, while preserving a high degree of response efficiency. The information captured by the response interval may represent a possible response range—i.e., a conjunctive set, such as the real numbers between 3 and 6. Alternatively, it may reflect uncertainty with respect to a distinct response—i.e., a disjunctive set, such as a confidence interval. We then report a validation study, utilizing our recently introduced open-source software (DECSYS), to explore how interval-valued survey responses reflect experimental manipulations of several factors hypothesised to influence interval width, across multiple contexts. Results consistently indicate that respondents used interval widths effectively, and subjective participant feedback was also positive. We present this as initial empirical evidence for the efficacy and value of interval-valued response capture. Interestingly, our results also provide insight into respondents' reasoning about the different aforementioned types of intervals—we replicate a tendency towards overconfidence for those representing epistemic uncertainty (i.e., disjunctive sets), but find intervals representing inherent range (i.e., conjunctive sets) to be well-calibrated.
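To make the interval-valued response mode concrete, here is a small sketch, with invented numbers, of how such responses might be stored and summarized; DECSYS's internal representation is not assumed.

```python
import numpy as np

# Hypothetical interval-valued responses on a 0-100 scale: each row is
# (lower, upper) from one respondent's ellipse marking.
responses = np.array([[30.0, 55.0], [62.0, 70.0], [10.0, 90.0], [45.0, 50.0]])

widths = responses[:, 1] - responses[:, 0]  # wider = more uncertainty/range

# For disjunctive intervals (uncertainty about one true value), calibration
# can be checked as coverage of known true values:
true_values = np.array([50.0, 75.0, 40.0, 47.0])
covered = (responses[:, 0] <= true_values) & (true_values <= responses[:, 1])
print(f"mean width = {widths.mean():.1f}, coverage = {covered.mean():.0%}")
# Coverage below the intended confidence level would reflect the
# overconfidence the authors report for epistemic-uncertainty intervals.
```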
6. Thakur T, Chewning B. Pharmacists' opioid risk and safety counseling practices: A latent class analysis approach. Res Social Adm Pharm 2021; 18:3013-3018. PMID: 34353756. DOI: 10.1016/j.sapharm.2021.07.023.
Abstract
BACKGROUND The opioid crisis is a global public health issue, especially present in the United States. Limited research addresses pharmacists' opioid medication counseling practices, particularly their risk and safety counseling practices. OBJECTIVES The objective of this paper is to categorize pharmacists based on their opioid risk and safety counseling practices, to inform future interventions and research to improve practice. The percentage of pharmacists falling into each of these underlying, unobservable subgroups is identified using latent class analysis. METHODS This study was conducted as a statewide survey of pharmacists using the modified Dillman technique. The survey consisted of ten items about pharmacists' opioid risk and safety practices when dispensing an opioid medication. Descriptive statistics were computed, followed by latent class analysis, which categorized pharmacists based on their responses to the survey items. RESULTS Responses from 216 pharmacists were used in this study. In the three-class model, which was deemed the best fit, the first class shows a profile of pharmacists who counsel on almost all opioid risk and safety topics and composed 16.75% of the total respondent population. The second class shows a profile of pharmacists who rarely counsel on any opioid risk and safety topics and comprised 39.80% of the respondent population. The third class shows a profile of pharmacists counseling on opioid risk and safety topics mostly for new or long-term prescriptions, but not for refill or short-term prescriptions; this group constituted 43.45% of the respondent population. CONCLUSION This study identifies distinct classes of pharmacists in terms of the frequency with which their opioid counseling does or does not include key elements of risk and safety topics. A small minority usually cover the risk and safety issues. Training and resource interventions targeting pharmacists who do not counsel patients about opioid risks are important to help them become more comfortable and adept as opioid risk and safety educators.
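For readers unfamiliar with latent class analysis, the sketch below shows the core of the method on simulated stand-in data: an EM loop for a binary-item latent class model. The authors presumably used a standard statistical package; this is an illustration of the technique, not their analysis code.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """EM for a latent class model with binary items.
    X: (respondents, items) 0/1 matrix."""
    rng = np.random.default_rng(seed)
    n_items = X.shape[1]
    pi = np.full(n_classes, 1.0 / n_classes)               # class prevalences
    theta = rng.uniform(0.25, 0.75, (n_classes, n_items))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior probability of each class per respondent
        loglik = (X[:, None, :] * np.log(theta)
                  + (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        logpost = np.log(pi) + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item-response profiles
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta

X = np.random.default_rng(1).integers(0, 2, size=(216, 10))  # stand-in data
pi, theta = fit_lca(X, n_classes=3)
print(np.round(pi, 3))  # estimated class sizes, cf. the 16.75/39.80/43.45% split
```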
Affiliation(s)
- Tanvee Thakur
- Social and Administrative Sciences Division, University of Wisconsin-Madison School of Pharmacy, Madison, WI, 53705, USA.
- Betty Chewning
- Social and Administrative Sciences in Pharmacy Division, University of Wisconsin-Madison School of Pharmacy, Madison, WI, 53705, USA.
7. Thakur T, Chewning B. Handout use to facilitate opioid risk and safety communication in community pharmacies. J Am Pharm Assoc (2003) 2021; 61:e96-e102. PMID: 34176760. DOI: 10.1016/j.japh.2021.06.010.
Abstract
BACKGROUND A number of opioid handouts exist for pharmacists to use in patient education. However, there is limited evidence about what pharmacists most want them to cover and how useful pharmacists perceive them to be. OBJECTIVES This study sought to (1) refine and revise an opioid safety handout to facilitate opioid risk and safety communication in community pharmacies and (2) assess the feasibility and acceptability of this tool using a statewide survey of community pharmacists. METHODS In phase 1, 8 community pharmacists were interviewed to refine and evaluate the opioid safety handout. In phase 2, a statewide sample of 700 pharmacists was surveyed to assess the acceptability and feasibility of using the revised handout. Survey data were analyzed using descriptive statistics and multiple regression analysis. RESULTS A total of 140 surveys were returned from community pharmacists. Over 60% of pharmacists reported that the handout would be useful in counseling patients on opioid risks and safety and would be a good opioid education tool for patients. Pharmacists who had practiced for many years (P = 0.002) and pharmacists who regularly discussed safe opioid disposal and storage (P = 0.002) reported a higher likelihood of using the handout. Pharmacists were much more likely to counsel patients on opioid risks and safety using this handout for a long-term opioid prescription than for a short-term opioid prescription. CONCLUSION A participatory research design successfully refined a handout for opioid risk and safety counseling, which the majority of pharmacists evaluated as feasible and useful for community practice.
8. Flint K, Spaulding TJ. Examining the Relationship Between the Readability and Comprehensibility of Practice Test Questions and Failure Rates on Learner's Permit Knowledge Tests. Lang Speech Hear Serv Sch 2021; 52:554-567. PMID: 33507826. DOI: 10.1044/2020_lshss-20-00042.
Abstract
Purpose The readability and comprehensibility of Learner's Permit Knowledge Test practice questions, and their relationship with test failure rates across states and the District of Columbia, were examined. Method Failure rates were obtained from department representatives. Practice test questions were extracted from drivers' manuals and department websites and examined for readability using the Flesch-Kincaid Grade Level and for comprehensibility using the Question Understanding Aid. The influence of readability and comprehensibility on test failure rates was explored. Results The average failure rate across reporting jurisdictions was 42.76%. In total, 11 out of 28 jurisdictions reported that test takers fail more than half the time, while 25 out of 28 reported that test takers fail at least a quarter of the time. While 33.09% of the variability in failure rates could be accounted for by the syntactic complexity of the questions, 54.18% could be accounted for by reading ease. Discussion With few exceptions, test failure rates are systematically high across the United States. The current findings suggest that these tests may be inappropriately biased against individuals with lower levels of literacy and language ability. Implications for test developers and clinicians are discussed.
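The Flesch-Kincaid Grade Level used in this study is a fixed formula, sketched below with a naive syllable heuristic; the sample question is invented, not taken from any jurisdiction's test.

```python
import re

def syllables(word: str) -> int:
    # Naive vowel-group heuristic; real implementations use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    # Flesch-Kincaid Grade Level formula
    return 0.39 * len(words) / len(sents) + 11.8 * syl / len(words) - 15.59

q = ("When driving in fog, you should use your low-beam headlights "
     "and increase your following distance.")
print(f"{flesch_kincaid_grade(q):.1f}")  # approximate school grade level
```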
Affiliation(s)
- Kaitlyn Flint
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Mansfield
- Tammie J Spaulding
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Mansfield
9. Moreno Sancho F, Tsakos G, Brealey D, Boniface D, Needleman I. Development of a tool to assess oral health-related quality of life in patients hospitalised in critical care. Qual Life Res 2019; 29:559-568. PMID: 31655973. PMCID: PMC6994456. DOI: 10.1007/s11136-019-02335-1.
Abstract
AIMS AND OBJECTIVES Oral health deteriorates following hospitalisation in critical care units (CCUs), but there are no validated measures to assess effects on oral health-related quality of life (OHQoL). The objectives of this study were (i) to develop a tool (CCU-OHQoL) to assess OHQoL amongst patients admitted to a CCU, (ii) to collect data to analyse the validity, reliability and acceptability of the CCU-OHQoL tool and (iii) to investigate patient-reported outcome measures of OHQoL in patients hospitalised in a CCU. METHODS The project included three phases: (1) development of an initial questionnaire informed by a literature review and expert panel, (2) testing of the tool in a CCU (n = 18) followed by semi-structured interviews to assess acceptability, face and content validity and (3) final tool modification and testing of the CCU-OHQoL questionnaire to assess validity and reliability. RESULTS The CCU-OHQoL showed good face and content validity and was quick to administer. Cronbach's alpha was 0.72, suggesting good internal consistency. For construct validity, the CCU-OHQoL was strongly and significantly correlated (correlation coefficients 0.71, 0.62 and 0.77, p < 0.01) with global OHQoL items. In the validation study, 37.8% of the participants reported a deterioration in self-reported oral health after CCU admission. Finally, 26.9% and 31% of the participants reported considerable negative impacts of oral health on their life overall and on their quality of life, respectively. CONCLUSIONS The new CCU-OHQoL tool may be of use in the assessment of oral health-related quality of life in CCU patients. Deterioration of OHQoL appears to be common in CCU patients.
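The internal-consistency figure reported here (Cronbach's alpha = 0.72) comes from a standard formula, sketched below on simulated stand-in data rather than the CCU-OHQoL responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents, items) matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Stand-in data: 18 respondents x 5 items driven by one shared trait,
# so the items are positively correlated and alpha is meaningful.
rng = np.random.default_rng(3)
trait = rng.normal(size=(18, 1))
scores = trait + rng.normal(size=(18, 5))
print(round(cronbach_alpha(scores), 2))
```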
Affiliation(s)
- Federico Moreno Sancho
- Unit of Periodontology, UCL Eastman Dental Institute, 1st Floor Levy wing, 256 Gray's Inn Road, London, WC1X 8LD, UK.
- Georgios Tsakos
- Department of Epidemiology and Public Health, University College London, 1-19 Torrington Place, London, WC1E 7HB, UK
- David Brealey
- Bloomsbury Institute of Intensive Care Medicine, UCL, London, UK
- David Boniface
- Epidemiology and Public Health, UCL Eastman Dental Institute, University College London, 256 Gray's Inn Road, London, WC1X 8LD, UK
- Biostatistics, UCL Eastman Dental Institute, UCL, 256 Gray's Inn Road, London, WC1X 8LD, UK
- Ian Needleman
- Unit of Periodontology, UCL Eastman Dental Institute, 1st Floor Levy wing, 256 Gray's Inn Road, London, WC1X 8LD, UK
- Centre for Oral Health and Performance, UCL Eastman Dental Institute, UCL, 256 Gray's Inn Road, London, WC1X 8LD, UK
10. Rapley T, Girling M, Mair FS, Murray E, Treweek S, McColl E, Steen IN, May CR, Finch TL. Improving the normalization of complex interventions: part 1 - development of the NoMAD instrument for assessing implementation work based on normalization process theory (NPT). BMC Med Res Methodol 2018; 18:133. PMID: 30442093. PMCID: PMC6238361. DOI: 10.1186/s12874-018-0590-y.
Abstract
BACKGROUND Understanding and measuring implementation processes is a key challenge for implementation researchers. This study draws on Normalization Process Theory (NPT) to develop an instrument that can be applied to assess, monitor or measure factors likely to affect normalization from the perspective of implementation participants. METHODS An iterative process of instrument development was undertaken using the following methods: theoretical elaboration, item generation and item reduction (team workshops); item appraisal (QAS-99); cognitive testing with complex intervention teams; theory re-validation with NPT experts; and pilot testing of the instrument. RESULTS We initially generated 112 potential questionnaire items; these were then reduced to 47 through team workshops and item appraisal. No concerns about item wording and construction were raised through the item appraisal process. We undertook three rounds of cognitive interviews with professionals (n = 30) involved in the development, evaluation, delivery or reception of complex interventions. We identified minor issues around the wording of some items; universal issues around how to engage with people at different time points in an intervention; and conceptual issues around the types of people for whom the instrument should be designed. We managed these by adding extra items (n = 6), including a new set of response options ('not relevant at this stage', 'not relevant to my role' and 'not relevant to this intervention'), and deciding to design an instrument explicitly for those people either delivering or receiving an intervention. This version of the instrument had 53 items. Twenty-three people with a good working knowledge of NPT reviewed the items for theoretical drift. Items that displayed poor alignment with NPT sub-constructs were removed (n = 8) and others revised or combined (n = 6). The final instrument, with 43 items, was successfully piloted with five people, with a 100% completion rate of items. CONCLUSION The process of moving through cycles of theoretical translation, item generation, cognitive testing, and theoretical (re)validation was essential for maintaining a balance between the theoretical integrity of the NPT concepts and the ease with which intended respondents could answer the questions. The final instrument could be easily understood and completed, while retaining theoretical validity. NoMAD represents a measure that can be used to understand implementation participants' experiences. It is intended as a measure that can be used alongside instruments that measure other dimensions of implementation activity, such as implementation fidelity, adoption, and readiness.
Affiliation(s)
- Tim Rapley
- Department of Social Work, Education and Community Wellbeing, Northumbria University, Coach Lane Campus West, Newcastle upon Tyne, NE7 7XA UK
- Melissa Girling
- Institute of Health & Society, Newcastle University, Baddiley-Clark Building, Richardson Road, Newcastle-upon-Tyne, NE2 4AX UK
- Frances S. Mair
- Institute of Health and Wellbeing, University of Glasgow, 1 Horselethill Road, Glasgow, G12 9LX UK
- Elizabeth Murray
- Research Department of Primary Care and Population Health, University College London, Upper Floor 3, Royal Free Hospital, Rowland Hill Street, London, NW3 2PF UK
- Shaun Treweek
- Health Services Research Unit, University of Aberdeen, 3rd Floor, Health Sciences Building, Foresterhill, Aberdeen, AB25 2ZD UK
- Elaine McColl
- Institute of Health & Society, Newcastle University, Baddiley-Clark Building, Richardson Road, Newcastle-upon-Tyne, NE2 4AX UK
- Ian Nicholas Steen
- Institute of Health & Society, Newcastle University, Baddiley-Clark Building, Richardson Road, Newcastle-upon-Tyne, NE2 4AX UK
- Carl R. May
- Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH UK
11. Karpen SC, Hagemeier NE. Assessing Faculty and Student Interpretations of AACP Survey Items with Cognitive Interviewing. Am J Pharm Educ 2017; 81:88. PMID: 28720916. PMCID: PMC5508087. DOI: 10.5688/ajpe81588.
Abstract
Objective. To use cognitive interviewing techniques to determine faculty and student interpretation of a subset of items from the AACP faculty and graduating student surveys. Methods. Students and faculty were interviewed individually in a private room. The interviewer asked each respondent for his/her interpretation of 15 randomly selected items from the graduating student survey or 20 items from the faculty survey. Results. While many items were interpreted consistently by respondents, the researchers identified several items that were either difficult to interpret or produced differing interpretations. Conclusion. Several interpretational inconsistencies and ambiguities were discovered that could compromise the usefulness of certain survey items.
Affiliation(s)
- Samuel C Karpen
- Gatton College of Pharmacy, East Tennessee State University, Johnson City, Tennessee
- Nicholas E Hagemeier
- Gatton College of Pharmacy, East Tennessee State University, Johnson City, Tennessee
12. Welk B, Morrow SA, Madarasz W, Potter P, Sequeira K. The conceptualization and development of a patient-reported neurogenic bladder symptom score. Res Rep Urol 2013; 5:129-37. PMID: 24400244. PMCID: PMC3826942. DOI: 10.2147/rru.s51020.
Abstract
Background There is no single patient-reported instrument that was developed specifically to assess symptoms and bladder-related consequences for neurogenic bladder dysfunction. The purpose of this study was to identify and consolidate items for a novel measurement tool for this population. Methods Item generation was based on a literature review of existing instruments, open-ended semistructured interviews with patients, and expert opinion. Judgment-based item reduction was performed by a multidisciplinary expert group. The proposed questionnaire was sent to external experts for review. Results Eight neurogenic quality of life measures and 29 urinary symptom-specific instruments were identified. From these, 266 relevant items were extracted and used in the creation of the new neurogenic symptom score. Qualitative interviews with 16 adult patients with neurogenic bladder dysfunction as a result of spinal cord injury, multiple sclerosis, or spina bifida were completed. Dominant themes included urinary incontinence, urinary tract infections, urgency, and bladder spasms. Using the literature review and interview data, 25 proposed items were reviewed by 12 external experts, and the questions evaluated based on importance on a scale of 1 (not important) to 5 (very important). Retained question domains had high mean importance ratings of 3.1 to 4.3 and good agreement with answer hierarchy. Conclusion The proposed neurogenic bladder symptom score is a novel patient-reported outcome measure. Further work is underway to perform a data-based item reduction and to assess the validity and reliability of this instrument.
Affiliation(s)
- Blayne Welk
- Department of Surgery, Division of Urology, Western University, London, ON, Canada
- Sarah A Morrow
- Department of Clinical Neurosciences, Western University, London, ON, Canada
- Patrick Potter
- Department of Physical Medicine and Rehabilitation, Western University, London, ON, Canada
- Keith Sequeira
- Department of Physical Medicine and Rehabilitation, Western University, London, ON, Canada
13. Finch TL, Rapley T, Girling M, Mair FS, Murray E, Treweek S, McColl E, Steen IN, May CR. Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol. Implement Sci 2013; 8:43. PMID: 23578304. PMCID: PMC3637119. DOI: 10.1186/1748-5908-8-43.
Abstract
BACKGROUND Understanding implementation processes is key to ensuring that complex interventions in healthcare are taken up in practice and thus maximize intended benefits for service provision and (ultimately) care to patients. Normalization Process Theory (NPT) provides a framework for understanding how a new intervention becomes part of normal practice. This study aims to develop and validate simple generic tools derived from NPT, to be used to improve the implementation of complex healthcare interventions. OBJECTIVES The objectives of this study are to: develop a set of NPT-based measures and formatively evaluate their use for identifying implementation problems and monitoring progress; conduct preliminary evaluation of these measures across a range of interventions and contexts, and identify factors that affect this process; explore the utility of these measures for predicting outcomes; and develop an online users' manual for the measures. METHODS A combination of qualitative (workshops, item development, user feedback, cognitive interviews) and quantitative (survey) methods will be used to develop NPT measures, and test the utility of the measures in six healthcare intervention settings. DISCUSSION The measures developed in the study will be available for use by those involved in planning, implementing, and evaluating complex interventions in healthcare and have the potential to enhance the chances of their implementation, leading to sustained changes in working practices.
Affiliation(s)
- Tracy L Finch
- Institute of Health and Society, Newcastle University, Baddiley-Clark Building, Richardson Road, Newcastle-upon-Tyne NE2 4AX, UK.
14. Readability and comprehension of self-report binge eating measures. Eat Behav 2013; 14:167-70. PMID: 23557814. PMCID: PMC3618665. DOI: 10.1016/j.eatbeh.2013.02.003.
Abstract
The validity of self-report binge eating instruments among individuals with limited literacy is uncertain. This study evaluated the reading grade level and multiple domains of comprehension of 13 commonly used self-report assessments of binge eating for use in low-literacy populations, examining each measure with respect to reading grade level, measure length, and formatting and linguistic problems. RESULTS All measures were written at a reading grade level higher than is recommended for patient materials (above the 5th to 6th grade level) and contained several challenging elements related to comprehension. Correlational analyses suggested that readability and comprehension elements were distinct contributors to measure difficulty. Individuals with binge eating who have low levels of educational attainment or limited literacy are often underrepresented in measure validation studies. Validity of measures and accurate assessment of symptoms depend on an individual's ability to read and comprehend instructions and items, and these may be compromised in populations with lower levels of literacy.
15. Clinton V, van den Broek P. Interest, inferences, and learning from texts. Learning and Individual Differences 2012. DOI: 10.1016/j.lindif.2012.07.004.
16. McHugh RK, Rasmussen JL, Otto MW. Comprehension of self-report evidence-based measures of anxiety. Depress Anxiety 2011; 28:607-14. PMID: 21618668. DOI: 10.1002/da.20827.
Abstract
BACKGROUND Given their applicability in diverse settings and for a wide range of purposes, the generalizability of self-report symptom measures is particularly important. An understudied factor in the development and validation of self-report measures is the degree to which they are difficult to comprehend. This study evaluated the difficulty of self-report measures of anxiety with respect to several domains, including formatting, length, and linguistic problems. METHODS Ninety-two evidence-based measures of anxiety were evaluated for comprehension level. RESULTS The majority of anxiety measures included challenging elements of formatting, linguistic ability, and readability. Measures of obsessive-compulsive disorder required the highest comprehension level (i.e., were the most difficult). CONCLUSIONS The validity of self-report measures relies on the ability of respondents to understand the instructions and measure items. Factors related to the comprehension of self-report measures should be included among the basic psychometric properties considered in measure development and validation. Future research on the development of self-report measures that are more broadly applicable across levels of education and literacy is of particular importance to research, clinical, and public health agendas.
Affiliation(s)
- R Kathryn McHugh
- Department of Psychology, Boston University, Boston, Massachusetts 02215, USA.
17. Rogers ES, Spalding SL, Eckard AA, Wallace LS. Are patient-administered attention deficit hyperactivity disorder scales suitable for adults? J Atten Disord 2009; 13:168-74. PMID: 18713845. DOI: 10.1177/1087054708323017.
Abstract
OBJECTIVE The primary purpose of this study was to examine the cognitive complexity and readability of patient-administered ADHD scales. The secondary purpose was to estimate variation in readability across individual ADHD scale items. METHOD Using comprehensive search strategies, we identified eight English-language ADHD scales for inclusion in our study. A complete copy of each ADHD scale was obtained from the most current publication. Cognitive complexity of individual ADHD scale items was assessed using three techniques (number of items, number of words, and linguistic problems), while readability was calculated using the Flesch-Kincaid formula. RESULTS The total number of ADHD scale items ranged from 6 to 66. ADHD scale items averaged from a low of 4.4 ± 2.9 to a high of 18.7 ± 4.4 words. Most individual ADHD scale items had between 1 and 3 linguistic problems. Although the readability of the ADHD scales ranged from approximately 5th to 8th grade, there was notable variation in readability across individual statements and questions. CONCLUSION Formatting characteristics, including linguistic problems and high reading grade levels, may interfere with patients' ability to accurately complete ADHD scales.
Affiliation(s)
- Edwin S Rogers
- University of Tennessee Graduate School of Medicine, Knoxville, TN 37920, USA
18. Omote S, Prado PSTD, Carrara K. Versão eletrônica de questionário e o controle de erros de resposta [Electronic questionnaire version and the control of response errors]. Estudos de Psicologia (Natal) 2005. DOI: 10.1590/s1413-294x2005000300008.
Abstract
This article reports an analysis of the errors made on a printed questionnaire and the applicability of an electronic version of the same questionnaire for controlling those errors. Sixty graduate students in Education answered the electronic version, programmed in Visual Basic, and another 52 answered the printed version. On the printed questionnaire, 95 errors were made, 28 of which did not invalidate the responses. The remaining errors could lead to the exclusion of the participants who made them if all items were analyzed strictly according to the instructions. The errors made on the printed version can be controlled in the electronic version through appropriate programming. No difficulty in answering the electronic version was identified. The advantages pointed out by respondents and the possibility of complete control of response errors, together with the elimination of tabulation errors through automatic insertion of responses into a database, recommend the use of electronic versions of questionnaires.
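The error control the authors programmed in Visual Basic rests on a simple idea: validate each response at entry time and refuse to store anything invalid. A minimal sketch of that idea in Python follows, with hypothetical rules rather than the original questionnaire's.

```python
def validate_single_choice(raw: str, n_options: int) -> int:
    """Reject blank, out-of-range, and multiple answers before storing:
    the kinds of response errors a paper form cannot prevent."""
    parts = raw.split()
    if len(parts) != 1:
        raise ValueError("exactly one option must be selected")
    choice = int(parts[0])  # non-numeric input raises here too
    if not 1 <= choice <= n_options:
        raise ValueError(f"choice must be between 1 and {n_options}")
    return choice

answers = []
for raw in ["3", "2 4", ""]:  # simulated user input
    try:
        answers.append(validate_single_choice(raw, n_options=5))
    except ValueError as err:
        print("rejected:", err)  # re-prompt instead of storing an error
print(answers)  # only valid responses reach the database
```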
19. Shumway M, Sentell T, Unick G, Bamberg W. Cognitive complexity of self-administered depression measures. J Affect Disord 2004; 83:191-8. PMID: 15555713. DOI: 10.1016/j.jad.2004.08.007.
Abstract
BACKGROUND Self-administered depression measures are important tools for research and practice, but their utility depends on the quality of the measurements they yield. Respondent comprehension is essential for meaningful measurement, and prior studies have used readability indices to assess comprehensibility. Readability, however, is only one aspect of comprehension, and empirical evidence shows that comprehension and measurement quality decrease as the cognitive complexity of standardized questions increases. Thus, cognitive complexity may provide a useful guide for selecting measures to maximize measurement quality. METHODS This study compared the cognitive complexity of 15 self-administered depression measures. Four aspects of cognitive complexity (length, readability, linguistic problems, and number) were combined to characterize overall complexity. RESULTS Measures varied considerably. The most cognitively complex measures, likely to be the most difficult to comprehend, were the Inventory to Diagnose Depression (IDD), the Hamilton Depression Inventory (HDI, full and short versions), and the Beck Depression Inventory (BDI, BDI-II, BDI-PC). The least complex measures, likely to be the easiest to comprehend, were the Harvard National Depression Screening Day Scale (HANDS), the Revised Hamilton Rating Scale for Depression Self-Report Problem Inventory (RHRSD), and the Zung Self-Rated Depression Scale (SDS). This multidimensional approach to assessing complexity and comprehensibility yielded different results than readability indices alone. LIMITATIONS This study did not include all self-administered depression measures and did not examine the relationship of cognitive complexity to actual responses to depression measures. CONCLUSIONS Since cognitive complexity is likely to limit comprehension and reduce measurement accuracy, it merits consideration in the selection of self-administered depression measures.
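The abstract does not state how the four aspects were combined, so the sketch below assumes one plausible rule, standardizing each aspect and averaging, purely to illustrate why a composite differs from a readability index alone; the numbers are invented.

```python
import numpy as np

# Hypothetical per-measure scores on the four aspects (rows = measures):
# [length, readability grade, linguistic problems, number]
aspects = np.array([
    [22.0, 9.1, 5.0, 4.0],
    [10.0, 6.2, 1.0, 2.0],
    [35.0, 11.4, 8.0, 6.0],
])

# One plausible composite: standardize each aspect, then average,
# so no single aspect dominates because of its raw scale.
z = (aspects - aspects.mean(axis=0)) / aspects.std(axis=0)
overall = z.mean(axis=1)
print(np.round(overall, 2))  # higher = more cognitively complex
```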
Affiliation(s)
- Martha Shumway
- UCSF Department of Psychiatry, 2727 Mariposa Street, Suite 100, San Francisco, CA 94100, USA.
20. Graesser AC, McNamara DS, Louwerse MM, Cai Z. Coh-Metrix: Analysis of text on cohesion and language. Behav Res Methods Instrum Comput 2004; 36:193-202. PMID: 15354684. DOI: 10.3758/bf03195564.
Abstract
Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, Coh-Metrix returns the measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.
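Coh-Metrix itself is a large system, but the flavor of a cohesion index, as opposed to a word- and sentence-length readability formula, can be shown in a few lines. The toy measure below, content-word overlap between adjacent sentences, is an illustration in the spirit of Coh-Metrix's referential cohesion indices, not its actual implementation; the stopword list is abbreviated.

```python
import re

STOP = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}  # abbreviated

def content_words(sentence: str) -> set:
    return {w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOP}

def adjacent_overlap(text: str) -> float:
    """Mean proportion of shared content words between adjacent sentences:
    a toy version of one referential cohesion index."""
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    scores = []
    for a, b in zip(sents, sents[1:]):
        wa, wb = content_words(a), content_words(b)
        scores.append(len(wa & wb) / max(1, len(wa | wb)))
    return sum(scores) / max(1, len(scores))

print(adjacent_overlap("The cell divides. Each new cell contains the same DNA."))
```

A text can score low on grade level yet still be hard to follow if adjacent sentences share no referents; that gap is what cohesion measures are designed to expose.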
Affiliation(s)
- Arthur C Graesser
- Department of Psychology, University of Memphis, Memphis, Tennessee 38152-3230, USA.