1
Luo X, Zheng R, Zhang J, He J, Luo W, Jiang Z, Li Q. CT-based radiomics for predicting Ki-67 expression in lung cancer: a systematic review and meta-analysis. Front Oncol 2024; 14:1329801. [PMID: 38384802] [PMCID: PMC10879429] [DOI: 10.3389/fonc.2024.1329801]
Abstract
Background: Radiomics, an emerging field, offers a promising avenue for the accurate prediction of biomarkers in solid cancers. Lung cancer remains a major global health challenge and a leading contributor to cancer-related mortality. Accurate assessment of Ki-67, a marker of cellular proliferation, is crucial for evaluating tumor aggressiveness and treatment responsiveness, particularly in non-small cell lung cancer (NSCLC).
Methods: This systematic review and meta-analysis was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy Studies (PRISMA-DTA) guidelines. Two authors independently searched PubMed, Embase, and Web of Science through September 23, 2023, for radiomics studies predicting Ki-67 expression in lung cancer. Study quality was evaluated with both the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool and the Radiomics Quality Score (RQS). STATA 14.2 was used to estimate pooled sensitivity, specificity, and other diagnostic values and to assess heterogeneity.
Results: Ten retrospective studies were pooled in the meta-analysis. Computed tomography (CT)-based radiomics showed encouraging diagnostic performance for predicting Ki-67 expression in lung cancer: pooled sensitivity, specificity, and area under the curve (AUC) were 0.78, 0.81, and 0.85 in training cohorts and 0.78, 0.70, and 0.81 in validation cohorts. Quality assessment with QUADAS-2 and RQS indicated generally acceptable study quality. Heterogeneity in the training cohorts was attributed to factors such as contrast-enhanced CT scanning and the specific Ki-67 thresholds used. Notably, publication bias was detected in the training cohort, indicating that positive results are more likely to be published than non-significant or negative ones; journals are therefore encouraged to publish negative results as well.
Conclusion: CT-based radiomics shows promise for predicting Ki-67 expression in lung cancer. While the results suggest potential clinical utility, further research should focus on improving diagnostic accuracy. This could pave the way for radiomics as a less invasive alternative to current procedures such as biopsy and surgery for assessing Ki-67 expression.
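The pooled sensitivity and specificity above come from a diagnostic meta-analysis model. As a minimal illustration of the underlying idea only (the review used STATA 14.2, and real DTA meta-analyses fit bivariate random-effects models that pool sensitivity and specificity jointly), here is a fixed-effect, inverse-variance pooling of per-study sensitivities on the logit scale; the study counts are hypothetical:

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pooled_proportion(events: list, totals: list) -> float:
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale. events[i]/totals[i] is a per-study proportion (e.g. sensitivity
    = TP / (TP + FN)); each study is weighted by the inverse of the
    logit-scale variance 1/e + 1/(n - e)."""
    num = den = 0.0
    for e, n in zip(events, totals):
        weight = 1 / (1 / e + 1 / (n - e))  # inverse of logit-scale variance
        num += weight * logit(e / n)
        den += weight
    return inv_logit(num / den)

# Hypothetical true-positive counts and diseased-patient totals per study:
pooled_sens = pooled_proportion([40, 55, 30], [50, 70, 40])
```

With these toy counts the pooled sensitivity lands near 0.78, between the individual study values, as inverse-variance weighting guarantees.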
Affiliation(s)
- Xinmin Luo
- Department of Radiology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Renying Zheng
- Department of Oncology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Jiao Zhang
- Department of Radiology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Juan He
- Department of Radiology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Wei Luo
- Department of Radiology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Zhi Jiang
- Department of Radiology, People’s Hospital of Yuechi County, Guang’an, Sichuan, China
- Qiang Li
- Department of Radiology, Yuechi County Traditional Chinese Medicine Hospital in Sichuan Province, Guang’an, Sichuan, China
2
White SJ, Phua QS, Lu L, Yaxley KL, McInnes MDF, To MS. Heterogeneity in Systematic Reviews of Medical Imaging Diagnostic Test Accuracy Studies: A Systematic Review. JAMA Netw Open 2024; 7:e240649. [PMID: 38421646] [PMCID: PMC10905313] [DOI: 10.1001/jamanetworkopen.2024.0649]
Abstract
Importance: Systematic reviews of medical imaging diagnostic test accuracy (DTA) studies are affected by between-study heterogeneity arising from a range of factors. Failure to appropriately assess the extent and causes of heterogeneity compromises the interpretability of systematic review findings.
Objective: To assess how heterogeneity has been examined in medical imaging DTA systematic reviews.
Evidence Review: The PubMed database was searched for systematic reviews of medical imaging DTA studies that performed a meta-analysis. The search was limited to the 40 highest-impact-factor journals in the radiology, nuclear medicine, and medical imaging category of the 2021 InCites Journal Citation Reports, targeting a sample of 200 to 300 included studies. Descriptive analysis characterized the imaging modality, target condition, meta-analysis model, strategies for evaluating heterogeneity, and sources of heterogeneity identified. Multivariable logistic regression assessed whether any factors were associated with identification of at least 1 source of heterogeneity in the included meta-analyses. Methodological quality was not evaluated. Data analysis occurred from October to December 2022.
Findings: A total of 242 meta-analyses involving a median (range) of 987 (119-441,510) patients across a diverse range of disease categories and imaging modalities were included. The extent of heterogeneity was adequately described (ie, whether it was absent, low, moderate, or high) in 220 studies (91%), most commonly using the I2 statistic (185 studies [76%]) and forest plots (181 studies [75%]). Heterogeneity was rated as moderate to high in 191 studies (79%). Of all included meta-analyses, 122 (50%) performed subgroup analysis and 87 (36%) performed meta-regression. Of the 242 studies, 189 (78%) included 10 or more primary studies; of these, 60 (32%) performed neither meta-regression nor subgroup analysis. Reported barriers to investigating sources of heterogeneity included inadequate reporting of primary study characteristics and a low number of included primary studies. Use of meta-regression was associated with identification of at least 1 source of variability (odds ratio, 1.90; 95% CI, 1.11-3.23; P = .02).
Conclusions and Relevance: In this systematic review, most medical imaging DTA meta-analyses were affected by a moderate to high level of heterogeneity, presenting interpretive challenges. Despite the development and availability of more rigorous statistical models, heterogeneity assessment appeared incomplete, inconsistent, or methodologically questionable in many cases, lessening the interpretability of the analyses performed. Comprehensive heterogeneity assessment should be addressed at the author level, by improving familiarity with appropriate statistical methodology and involving biostatisticians and epidemiologists in study design, and at the editorial level, by mandating adherence to methodologic standards in primary DTA studies and DTA meta-analyses.
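The I2 statistic that most of these reviews used to quantify heterogeneity has a simple closed form (Higgins and Thompson). A minimal sketch, with illustrative Q and df values rather than figures from this review:

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I^2: the percentage of total variation across studies that
    is due to heterogeneity rather than chance.
    I^2 = max(0, (Q - df) / Q) * 100, where Q is Cochran's Q statistic and
    df = number of studies - 1."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# Illustrative: Q = 40 across 9 studies (df = 8) gives I^2 = 80%,
# conventionally rated "high" heterogeneity.
high = i_squared(40, 8)
```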
Affiliation(s)
- Samuel J. White
- Adelaide Medical School, Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Qi Sheng Phua
- College of Medicine and Public Health, Flinders University, Bedford Park, South Australia, Australia
- Lucy Lu
- College of Medicine and Public Health, Flinders University, Bedford Park, South Australia, Australia
- Kaspar L. Yaxley
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, South Australia, Australia
- Matthew D. F. McInnes
- Department of Radiology, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
- Minh-Son To
- College of Medicine and Public Health, Flinders University, Bedford Park, South Australia, Australia
- South Australia Medical Imaging, Flinders Medical Centre, Bedford Park, South Australia, Australia
3
Rooprai P, Islam N, Salameh JP, Ebrahimzadeh S, Kazi A, Frank R, Ramsay T, Mathur MB, Absi M, Khalil A, Kazi S, Dawit H, Lam E, Fabiano N, McInnes MDF. Is There Evidence of P-Hacking in Imaging Research? Can Assoc Radiol J 2023; 74:497-507. [PMID: 36412994] [PMCID: PMC10338063] [DOI: 10.1177/08465371221139418]
Abstract
BACKGROUND: P-hacking, the tendency to run selective analyses until they become significant, is prevalent in many scientific disciplines.
PURPOSE: To assess whether p-hacking exists in imaging research.
METHODS: Protocol, data, and code are available at https://osf.io/xz9ku/?view_only=a9f7c2d841684cb7a3616f567db273fa. We searched imaging journals in Ovid MEDLINE from 1972 to 2021. Text mining with a Python script was used to collect metadata (journal, publication year, title, abstract) and P-values from abstracts; one P-value was randomly sampled per abstract. We assessed for evidence of p-hacking using a p-curve, evaluating for a concentration of P-values just below .05. A one-tailed binomial test (α = .05) assessed whether more P-values fell in the upper range (.045 < P < .05) than in the lower range (.04 < P < .045). To assess the variation introduced by randomly sampling a single P-value per abstract, we repeated the sampling process 1000 times and pooled results across samples. Analysis was also performed by 10-year period to determine whether p-hacking practices evolved over time.
RESULTS: Our search of 136 journals identified 967,981 abstracts. Text mining identified 293,687 P-values, of which 4105 randomly sampled P-values were included in the p-hacking analysis; 108/136 journals (80%) and 4105/967,981 abstracts (0.4%) contributed to the analysis. P-values did not concentrate just under .05; in fact, more P-values fell in the lower range (.04 < P < .045) than just below .05 (.045 < P < .05), indicating a lack of evidence for p-hacking. Time-trend analysis did not identify p-hacking in any of the five 10-year periods.
CONCLUSION: We did not identify evidence of p-hacking in abstracts published in over 100 imaging journals since 1972. These analyses cannot detect all forms of p-hacking, and other forms of bias, such as publication bias and selective outcome reporting, may exist in imaging research.
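The one-tailed binomial test the authors describe can be sketched directly with stdlib Python; the bin counts here are hypothetical, while the bin boundaries follow the abstract:

```python
from math import comb

def binom_sf(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-tailed binomial test."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts of sampled abstract P-values near the .05 threshold:
upper = 48  # .045 < P < .05
lower = 52  # .04  < P < .045
# Under H0 (no p-hacking), a significant P-value is equally likely to land
# in either bin, so we test whether `upper` is improbably large.
p_value = binom_sf(upper, upper + lower)
evidence_of_p_hacking = p_value < 0.05
```

With more P-values in the lower bin than the upper bin, as the study observed, the test cannot reject the null of no p-hacking.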
Affiliation(s)
- Paul Rooprai
- Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Nayaar Islam
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Jean-Paul Salameh
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Sanam Ebrahimzadeh
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Robert Frank
- Department of Radiology, Faculty of Medicine, Ottawa Hospital, Ottawa, ON, Canada
- Tim Ramsay
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Maya B. Mathur
- Quantitative Sciences Unit and Department of Pediatrics, Stanford University, Stanford, California, USA
- Marissa Absi
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Ahmed Khalil
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Sakib Kazi
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Haben Dawit
- Department of Radiology, Faculty of Medicine, Ottawa Hospital, Ottawa, ON, Canada
- Eric Lam
- Department of Radiology, Faculty of Medicine, Ottawa Hospital, Ottawa, ON, Canada
4
Sajid IM, Frost K, Paul AK. 'Diagnostic downshift': clinical and system consequences of extrapolating secondary care testing tactics to primary care. BMJ Evid Based Med 2022; 27:141-148. [PMID: 34099498] [DOI: 10.1136/bmjebm-2020-111629]
Abstract
Numerous drivers push specialist diagnostic approaches down to primary care ('diagnostic downshift'), intuitively welcomed by clinicians and patients. However, primary care's different population and processes result in under-recognised, unintended consequences. Testing performs more poorly in primary care, with indication creep due to earlier, more undifferentiated presentation, and reduced accuracy due to spectrum bias and the 'false-positive paradox'. In low-prevalence settings, tests without near-100% specificity have their useful yield eclipsed by greater incidental or false-positive findings. Ensuing cascades and multiplier effects can generate clinician workload, patient anxiety, further low-value tests, referrals, treatments, and a potentially nocebic population 'disease' burden of unclear benefit. Increased diagnostics earlier in pathways can burden patients and stretch general practice (GP) workloads, inducing downstream service utilisation and unintended 'market failure' effects. Evidence is tenuous that downshift reduces secondary care referrals, provides patient reassurance, or meaningfully improves clinical outcomes. Consequently, inflated investment in per capita testing at a lower level of a healthcare system may deliver diminishing or even negative economic returns. Test cost poorly represents 'value', neglecting under-recognised downstream consequences, which must be balanced against therapeutic yield. With lower positive predictive values, more tests are required per true diagnosis, and cost-effectiveness is rarely robust. With fixed secondary care capacity, novel primary care testing adds cost pressure and rarely reduces hospital activity. GP testing strategies require real-world evaluation, in primary care populations, of all downstream consequences. Test formularies should be scrutinised in view of the setting of care, with interventions to focus rational testing towards those with higher pretest probabilities, while improving interpretation and communication of results.
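The 'false-positive paradox' the authors invoke follows directly from Bayes' rule: at low prevalence, even an accurate test returns mostly false positives. A quick sketch with hypothetical test characteristics:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    PPV = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same 90%-sensitive, 90%-specific test (hypothetical figures):
specialist = ppv(0.9, 0.9, 0.30)  # ~0.79 at 30% pretest probability
primary = ppv(0.9, 0.9, 0.02)     # ~0.16 at 2% pretest probability
```

The identical test drops from roughly four-in-five true positives in a high-prevalence clinic to roughly one-in-six in a low-prevalence primary care population, which is the mechanism behind the cascades described above.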
Affiliation(s)
- Imran Mohammed Sajid
- NHS West London Clinical Commissioning Group, London, UK
- University of Global Health Equity, Kigali, Rwanda
- Kathleen Frost
- NHS Central London Clinical Commissioning Group, London, UK
- Ash K Paul
- NHS South West London Health and Care Partnership STP, London, UK
5
Frank RA, Fabiano N, Hallgrimson Z, Korevaar DA, Cohen JF, Bossuyt PM, Leeflang MMG, Moher D, McInnes MDF, Treanor L, Salameh JP, McGrath TA, Sharifabadi AD, Atyani A, Kazi S, Choo-Foo J, Asraoui N, Alabousi M, Ha W, Prager R, Rooprai P, Pozdnyakov A, John S, Osman H, Islam N, Li N, Gauthier ID, Absi M, Kraaijpoel N, Ebrahimzadeh S, Port JD, Stoker J, Klein JS, Schweitzer M. Association of Accuracy, Conclusions, and Reporting Completeness With Acceptance by Radiology Conferences and Journals. J Magn Reson Imaging 2022; 56:380-390. [PMID: 34997786] [DOI: 10.1002/jmri.28046]
Abstract
BACKGROUND: Preferential publication of studies with positive findings can lead to overestimation of diagnostic test accuracy (ie, publication bias). Understanding the contribution of the editorial process to publication bias could inform interventions to optimize the evidence guiding clinical decisions.
PURPOSE/HYPOTHESIS: To evaluate whether accuracy estimates, abstract conclusion positivity, and completeness of abstract reporting are associated with acceptance to radiology conferences and journals.
STUDY TYPE: Meta-research.
POPULATION: Abstracts submitted to radiology conferences (European Society of Gastrointestinal and Abdominal Radiology [ESGAR] and International Society for Magnetic Resonance in Medicine [ISMRM]) from 2008 to 2018, and manuscripts submitted to radiology journals (Radiology and Journal of Magnetic Resonance Imaging [JMRI]) from 2017 to 2018. Primary clinical studies evaluating sensitivity and specificity of a diagnostic imaging test in humans with available editorial decisions were included.
ASSESSMENT: Primary variables (Youden's index [YI > 0.8 vs. < 0.8], abstract conclusion positivity [positive vs. neutral/negative], and number of reported items on the Standards for Reporting of Diagnostic Accuracy Studies [STARD] for Abstracts guideline) and confounding variables (prospective vs. retrospective/unreported design, sample size, study duration, interobserver agreement assessment, subspecialty, and modality) were extracted.
STATISTICAL TESTS: Multivariable logistic regression to obtain adjusted odds ratios (OR) as measures of the association between the primary variables and acceptance by radiology conferences and journals; 95% confidence intervals (CI) and P-values were obtained, with P < 0.05 as the threshold for statistical significance.
RESULTS: A total of 1000 conference abstracts (500 ESGAR and 500 ISMRM) and 1000 journal manuscripts (505 Radiology and 495 JMRI) were included. Conference abstract acceptance was not significantly associated with YI (adjusted OR = 0.97 for YI > 0.8; CI = 0.70-1.35), conclusion positivity (OR = 1.21 for positive conclusions; CI = 0.75-1.90), or STARD for Abstracts adherence (OR = 0.96 per unit increase in reported items; CI = 0.82-1.18). Manuscripts with positive abstract conclusions were less likely to be accepted by radiology journals (OR = 0.45; CI = 0.24-0.86), while YI (OR = 0.85; CI = 0.56-1.29) and STARD for Abstracts adherence (OR = 1.06; CI = 0.87-1.30) showed no significant association. Positive conclusions were present in 86.7% of submitted conference abstracts and 90.2% of journal manuscripts.
DATA CONCLUSION: Diagnostic test accuracy studies with positive findings were not preferentially accepted by the evaluated radiology conferences or journals.
EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
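Youden's index, used above to dichotomize accuracy, is simply sensitivity plus specificity minus 1. A one-line sketch with illustrative values:

```python
def youden_index(sensitivity: float, specificity: float) -> float:
    """Youden's J = sensitivity + specificity - 1; 0 for an uninformative
    test, 1 for a perfect one."""
    return sensitivity + specificity - 1

# The study stratified accuracy at YI > 0.8, e.g. (illustrative values):
j = youden_index(0.95, 0.90)  # ~0.85, i.e. the high-accuracy stratum
```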
Affiliation(s)
- Robert A Frank
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Nicholas Fabiano
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Zachary Hallgrimson
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Daniël A Korevaar
- Department of Respiratory Medicine, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands
- Jérémie F Cohen
- Department of Pediatrics and Inserm UMR 1153 (Centre of Research in Epidemiology and Statistics), Necker - Enfants Malades Hospital, Assistance Publique - Hôpitaux de Paris, Université de Paris, Paris, France
- Patrick M Bossuyt
- Epidemiology and Data Science, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- Mariska M G Leeflang
- Epidemiology and Data Science, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- David Moher
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Canada
- Matthew D F McInnes
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Canada
- Lee Treanor
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Jean-Paul Salameh
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Canada
- Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Trevor A McGrath
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Almohannad Atyani
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Sakib Kazi
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Jade Choo-Foo
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Nabil Asraoui
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Winston Ha
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Ross Prager
- Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Paul Rooprai
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Alex Pozdnyakov
- Department of Radiology, McMaster University, Hamilton, Canada
- Susan John
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Heba Osman
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Nayaar Islam
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Canada
- Nicole Li
- Department of Radiology, McMaster University, Hamilton, Canada
- Isabelle D Gauthier
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Marissa Absi
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Noëmie Kraaijpoel
- Department of Vascular Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- Sanam Ebrahimzadeh
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Canada
- John D Port
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Jaap Stoker
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands
- Jeffrey S Klein
- Department of Radiology, University of Vermont Medical Center, Burlington, Vermont, USA
- Mark Schweitzer
- Department of Radiology, Wayne State University School of Medicine, Detroit, Michigan, USA
6
Kurth DA, Karmazyn BK, Waldrip CA, Chatfield M, Lockhart ME. ACR Appropriateness Criteria® Methodology. J Am Coll Radiol 2021; 18:S240-S250. [PMID: 34794586] [DOI: 10.1016/j.jacr.2021.03.021]
Abstract
The ACR Appropriateness Criteria® (AC) are evidence-based guidelines that help physicians order appropriate imaging. The AC development and revision process follows a transparent methodology that includes systematic analysis of current medical literature from peer-reviewed journals and the application of well-established guideline standards (the Institute of Medicine's Clinical Practice Guidelines We Can Trust) and methodologies (the RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development and Evaluation) to rate the benefits and potential risks, or appropriateness, of imaging and treatment procedures for specific clinical scenarios. In the October 2020 release, this methodology was applied in the development of 198 AC documents covering 1,760 clinical scenarios and making more than 8,815 recommendations, authored by more than 600 members representing multiple expert societies and drawing on more than 6,200 references. The ACR is recognized by CMS as a qualified provider-led entity for the development of appropriate use criteria. This paper describes the methodology and illustrates adherence to the process in the development of the AC.
Affiliation(s)
- David A Kurth
- Senior Director of Guidelines Development, American College of Radiology, Reston, Virginia
- Boaz K Karmazyn
- Riley Hospital for Children, Indiana University, Indianapolis, Indiana
- Christine A Waldrip
- Director of Appropriateness Criteria, American College of Radiology, Reston, Virginia
- Mythreyi Chatfield
- Executive Vice President for Quality and Safety, American College of Radiology, Reston, Virginia
- Mark E Lockhart
- Chair, Radiology Departmental Appointments, Promotions, and Tenure Committee, and Departmental Chief, Genitourinary Imaging, University of Alabama at Birmingham, Birmingham, Alabama; ACR Chair, Appropriateness Committee; Chair, Society of Radiologists in Ultrasound, Annual Meeting Program Committee; and Chair, Research Committee of AIUM Future Fund
7
Hallgrimson Z, Fabiano N, Salameh JP, Treanor LM, Frank RA, Sharifabadi AD, McInnes MDF. Tweeting Bias in Diagnostic Test Accuracy Research: Does Title or Conclusion Positivity Influence Dissemination? Can Assoc Radiol J 2021; 73:49-55. [PMID: 33874758] [DOI: 10.1177/08465371211006420]
Abstract
PURPOSE: To examine whether tweeting bias exists within the imaging literature by determining whether diagnostic test accuracy (DTA) studies with positive titles or conclusions are tweeted more than non-positive studies.
METHODS: DTA studies published between October 2011 and April 2016 were included. Positivity of titles and conclusions was assessed independently and in duplicate, with disagreements resolved by consensus. A negative binomial regression controlling for confounding variables assessed the relationship between title or conclusion positivity and the number of tweets an article received in the 100 days post-publication.
RESULTS: 354 DTA studies were included. Twenty-four (7%) titles and 300 (85%) conclusions were positive (or positive with qualifier); 1 (0.3%) title and 23 (7%) conclusions were negative; and 329 (93%) titles and 26 (7%) conclusions were neutral. Studies with positive, negative, and neutral titles received a mean of 0.38, 0.00, and 0.45 tweets per study, while those with positive, negative, and neutral conclusions received a mean of 0.44, 0.61, and 0.38 tweets per study. Regression coefficients were -0.05 (SE 0.46) for positive relative to non-positive titles and -0.09 (SE 0.31) for positive relative to non-positive conclusions. Positivity of the title (P = 0.91) or conclusion (P = 0.76) was not significantly associated with the number of tweets an article received.
CONCLUSIONS: The positivity of a DTA study's title or conclusion does not influence the number of tweets it receives, suggesting that tweet bias is not present among imaging diagnostic accuracy studies. The study protocol is available at https://osf.io/hdk2m/.
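In a negative binomial regression with a log link, a coefficient on a binary predictor exponentiates to a rate ratio. A sketch using the reported conclusion-positivity coefficient of -0.09 (the intercept here is illustrative, not a fitted value):

```python
import math

def expected_count(intercept: float, coef: float, positive: bool) -> float:
    """Mean of a log-link count model (Poisson or negative binomial):
    log(mu) = b0 + b1 * positive, so mu = exp(b0 + b1 * positive)."""
    return math.exp(intercept + coef * (1 if positive else 0))

# exp(b1) is the rate ratio of expected tweets for positive vs.
# non-positive conclusions; b1 = -0.09 was reported (SE 0.31).
rate_ratio = expected_count(0.0, -0.09, True) / expected_count(0.0, -0.09, False)
```

The ratio works out to roughly 0.91, about 9% fewer expected tweets, consistent with the non-significant P = 0.76.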
Affiliation(s)
- Zachary Hallgrimson
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ontario, Canada
- Nicholas Fabiano
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ontario, Canada
- Jean-Paul Salameh
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ontario, Canada
- Lee M Treanor
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ontario, Canada
- Robert A Frank
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ontario, Canada
- Matthew D F McInnes
- Department of Radiology, Faculty of Medicine, University of Ottawa, Ontario, Canada
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ontario, Canada