1
Hamilton DG, Page MJ, Everitt S, Fraser H, Fidler F. Cancer researchers' experiences with and perceptions of research data sharing: Results of a cross-sectional survey. Account Res 2025; 32:530-557. [PMID: 38299475] [DOI: 10.1080/08989621.2024.2308606]
Abstract
BACKGROUND Despite wide recognition of the benefits of sharing research data, public availability rates have not increased substantially in oncology or medicine more broadly over the last decade. METHODS We surveyed 285 cancer researchers to determine their prior experience with sharing data and views on known drivers and inhibitors. RESULTS We found that 45% of respondents had shared some data from their most recent empirical publication, with respondents who typically studied non-human research participants, or routinely worked with human genomic data, more likely to share than those who did not. A third of respondents added that they had previously shared data privately, with 74% indicating that doing so had also led to authorship opportunities or future collaborations for them. Journal and funder policies were reported to be the biggest general drivers toward sharing, whereas commercial interests, agreements with industrial sponsors and institutional policies were the biggest prohibitors. We show that researchers' decisions about whether to share data are also likely to be influenced by participants' desires. CONCLUSIONS Our survey suggests that increased promotion and support by research institutions, alongside greater championing of data sharing by journals and funders, may motivate more researchers in oncology to share their data.
Affiliation(s)
- Daniel G Hamilton
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Melbourne, Australia
- Melbourne Medical School, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, Australia
- Matthew J Page
- Methods in Evidence Synthesis Unit, School of Public Health & Preventive Medicine, Monash University, Melbourne, Australia
- Sarah Everitt
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Australia
- Hannah Fraser
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Melbourne, Australia
- Fiona Fidler
- MetaMelb Research Group, School of BioSciences, University of Melbourne, Melbourne, Australia
- School of History & Philosophy of Sciences, University of Melbourne, Melbourne, Australia
2
Aromiwura AA, Settle T, Umer M, Joshi J, Shotwell M, Mattumpuram J, Vorla M, Sztukowska M, Contractor S, Amini A, Kalra DK. Artificial intelligence in cardiac computed tomography. Prog Cardiovasc Dis 2023; 81:54-77. [PMID: 37689230] [DOI: 10.1016/j.pcad.2023.09.001]
Abstract
Artificial Intelligence (AI) is a broad discipline of computer science and engineering. Modern application of AI encompasses intelligent models and algorithms for automated data analysis and processing, data generation, and prediction with applications in visual perception, speech understanding, and language translation. AI in healthcare uses machine learning (ML) and other predictive analytical techniques to help sort through vast amounts of data and generate outputs that aid in diagnosis, clinical decision support, workflow automation, and prognostication. Coronary computed tomography angiography (CCTA) is an ideal union for these applications due to vast amounts of data generation and analysis during cardiac segmentation, coronary calcium scoring, plaque quantification, adipose tissue quantification, peri-operative planning, fractional flow reserve quantification, and cardiac event prediction. In the past 5 years, there has been an exponential increase in the number of studies exploring the use of AI for cardiac computed tomography (CT) image acquisition, de-noising, analysis, and prognosis. Beyond image processing, AI has also been applied to improve the imaging workflow in areas such as patient scheduling, urgent result notification, report generation, and report communication. In this review, we discuss algorithms applicable to AI and radiomic analysis; we then present a summary of current and emerging clinical applications of AI in cardiac CT. We conclude with AI's advantages and limitations in this new field.
Affiliation(s)
- Tyler Settle
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
- Muhammad Umer
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jonathan Joshi
- Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Matthew Shotwell
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jishanth Mattumpuram
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Mounica Vorla
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Maryta Sztukowska
- Clinical Trials Unit, University of Louisville, Louisville, KY, USA; University of Information Technology and Management, Rzeszow, Poland
- Sohail Contractor
- Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Amir Amini
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Dinesh K Kalra
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
3
Ohmann C, Moher D, Siebert M, Motschall E, Naudet F. Status, use and impact of sharing individual participant data from clinical trials: a scoping review. BMJ Open 2021; 11:e049228. [PMID: 34408052] [PMCID: PMC8375721] [DOI: 10.1136/bmjopen-2021-049228]
Abstract
OBJECTIVES To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data. ELIGIBILITY CRITERIA All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials. SOURCES OF EVIDENCE We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication. In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders. CHARTING METHODS Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain. RESULTS 93 studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal. A survey of journal data suggests poor to moderate enforcement of the policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When requested, the purpose of the reuse is more often secondary analyses and meta-analyses, rarely re-analyses. Finally, studies focused on the real impact of data-sharing were rare and used surrogates such as citation metrics. CONCLUSIONS There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High-level evidence is needed to assess whether the value of medical research increases with data-sharing practices.
Affiliation(s)
- Christian Ohmann
- European Clinical Research Infrastructure Network, Paris, France
- David Moher
- Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Maximilian Siebert
- CHU Rennes, CIC 1414 (Centre d'Investigation Clinique de Rennes), University Rennes, Rennes, France
- Edith Motschall
- Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Freiburg, Baden-Württemberg, Germany
- Florian Naudet
- CHU Rennes, INSERM CIC 1414 (Centre d'Investigation Clinique de Rennes), University Rennes, Rennes, Bretagne, France
4
Wright BD, Vo N, Nolan J, Johnson AL, Braaten T, Tritz D, Vassar M. An analysis of key indicators of reproducibility in radiology. Insights Imaging 2020; 11:65. [PMID: 32394098] [PMCID: PMC7214585] [DOI: 10.1186/s13244-020-00870-x]
Abstract
BACKGROUND Given the central role of radiology in patient care, it is important that radiological research is grounded in reproducible science. It is unclear whether there is a lack of reproducibility or transparency in radiologic research. PURPOSE To analyze published radiology literature for the presence or lack of key indicators of reproducibility. METHODS This cross-sectional retrospective study was performed by conducting a search of the National Library of Medicine (NLM) for publications contained within journals in the field of radiology. Our inclusion criteria were being MEDLINE indexed, written in English, and published from January 1, 2014, to December 31, 2018. We randomly sampled 300 publications for this study. A pilot-tested Google form was used to record information from the publications regarding indicators of reproducibility. Following peer-review, we extracted data from an additional 200 publications in an attempt to reproduce our initial results. The additional 200 publications were selected from the list of initially randomized publications. RESULTS Our initial search returned 295,543 records, from which 300 were randomly selected for analysis. Of these 300 records, 294 met inclusion criteria and 6 did not. Among the empirical publications, 5.6% (11/195, [3.0-8.3]) contained a data availability statement, 0.51% (1/195) provided clear documented raw data, 12.0% (23/191, [8.4-15.7]) provided a materials availability statement, 0% provided analysis scripts, 4.1% (8/195, [1.9-6.3]) provided a pre-registration statement, 2.1% (4/195, [0.4-3.7]) provided a protocol statement, and 3.6% (7/195, [1.5-5.7]) were pre-registered. The validation study of the 5 key indicators of reproducibility (availability of data, materials, protocols, analysis scripts, and pre-registration) resulted in 2 indicators (availability of protocols and analysis scripts) being reproduced, as they fell within the 95% confidence intervals for the proportions from the original sample. However, materials availability and pre-registration proportions from the validation sample were lower than what was found in the original sample. CONCLUSION Our findings demonstrate key indicators of reproducibility are missing in the field of radiology. Thus, the ability to reproduce studies contained in radiology publications may be problematic and may have potential clinical implications.
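The validation logic above (checking whether a replication proportion falls inside the original sample's 95% confidence interval) can be illustrated with a small sketch. This is a minimal example using the standard Wald interval for the 11/195 data-availability figure; it is an assumption on our part, as the paper does not state which interval method it used, and its reported bounds ([3.0-8.3]) differ slightly from the Wald result below.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Approximate 95% Wald confidence interval for a proportion successes/n."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Data availability statements: 11 of 195 empirical publications
lo, hi = wald_ci(11, 195)
print(f"{11/195:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # → 5.6% (95% CI 2.4% to 8.9%)

# A validation-sample proportion "reproduces" the original if it lies in this CI
def reproduced(validation_p, lo, hi):
    return lo <= validation_p <= hi
```

For small counts like these, alternatives such as the Wilson score interval are usually preferred over the Wald interval, which may explain the small discrepancy with the published bounds.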
Affiliation(s)
- Bryan D Wright
- Oklahoma State University Center for Health Sciences, 1111 W 17th St, Tulsa, OK, 74107, USA
- Nam Vo
- Kansas City University of Medicine and Biosciences, Joplin, MO, USA
- Johnny Nolan
- Kansas City University of Medicine and Biosciences, Joplin, MO, USA
- Austin L Johnson
- Oklahoma State University Center for Health Sciences, 1111 W 17th St, Tulsa, OK, 74107, USA
- Tyler Braaten
- Department of Diagnostic and Interventional Imaging, The University of Texas Health Sciences Center at Houston, Houston, TX, USA
- Daniel Tritz
- Oklahoma State University Center for Health Sciences, 1111 W 17th St, Tulsa, OK, 74107, USA
- Matt Vassar
- Oklahoma State University Center for Health Sciences, 1111 W 17th St, Tulsa, OK, 74107, USA
5
Recht MP, Dewey M, Dreyer K, Langlotz C, Niessen W, Prainsack B, Smith JJ. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol 2020; 30:3576-3584. [PMID: 32064565] [DOI: 10.1007/s00330-020-06672-5]
Abstract
Artificial intelligence (AI) has the potential to significantly disrupt the way radiology will be practiced in the near future, but several issues need to be resolved before AI can be widely implemented in daily practice. These include the role of the different stakeholders in the development of AI for imaging, the ethical development and use of AI in healthcare, the appropriate validation of each developed AI algorithm, the development of effective data sharing mechanisms, regulatory hurdles for the clearance of AI algorithms, and the development of AI educational resources for both practicing radiologists and radiology trainees. This paper details these issues and presents possible solutions based on discussions held at the 2019 meeting of the International Society for Strategic Studies in Radiology. KEY POINTS:
- Radiologists should be aware of the different types of bias commonly encountered in AI studies, and understand their possible effects.
- Methods for effective data sharing to train, validate, and test AI algorithms need to be developed.
- It is essential for all radiologists to gain an understanding of the basic principles, potentials, and limits of AI.
Affiliation(s)
- Michael P Recht
- Department of Radiology, New York University Robert I Grossman School of Medicine, New York, NY, USA
- Marc Dewey
- Charité - Universitätsmedizin Berlin, Humboldt-Universität and Freie Universität zu Berlin, Berlin, Germany
- Keith Dreyer
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Curtis Langlotz
- Department of Radiology and Biomedical Informatics, Stanford University, Palo Alto, CA, USA
- Wiro Niessen
- Department of Radiology, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Faculty of Applied Sciences, Delft University of Technology, Delft, The Netherlands
- Barbara Prainsack
- Department of Political Science, University of Vienna, Vienna, Austria
- Department of Global Health & Social Medicine, King's College, London, UK