1. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. Can Assoc Radiol J 2024; 75:226-244. [PMID: 38251882] [DOI: 10.1177/08465371231222229]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims of their utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful, ones. This multi-society paper, presenting the views of radiology societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources and their implementation as clinical tools.

2. Video Endoscopy as Big Data: Balancing Privacy and Progress in Gastroenterology. Am J Gastroenterol 2024; 119:600-605. [PMID: 37975601] [PMCID: PMC10984632] [DOI: 10.14309/ajg.0000000000002597]

3. Optimizing the Clinical Direction of Artificial Intelligence With Health Policy: A Narrative Review of the Literature. Cureus 2024; 16:e58400. [PMID: 38756258] [PMCID: PMC11098056] [DOI: 10.7759/cureus.58400]
Abstract
Artificial intelligence (AI) has the ability to transform the healthcare industry by enhancing diagnosis, treatment, and resource allocation. To ensure patient safety and equitable access to healthcare, however, the ethical and practical issues it raises must be carefully addressed. To realize its full potential, the ethical issues around data privacy, bias, and transparency, as well as the practical difficulties posed by workforce adaptability and statutory frameworks, must be resolved. While there is growing knowledge about the advantages of AI in healthcare, there is a significant lack of knowledge about the ethical and practical issues that come with its application, particularly in the setting of emergency and critical care. Most current research concentrates on the benefits of AI, but thorough studies that investigate the potential disadvantages and ethical issues are scarce. The purpose of our article is to identify and examine the ethical and practical difficulties that arise when implementing AI in emergency medicine and critical care, to propose solutions to these issues, and to give suggestions to healthcare professionals and policymakers. To responsibly and successfully integrate AI in these important healthcare domains, policymakers and healthcare professionals must collaborate to create strong regulatory frameworks, safeguard data privacy, mitigate bias, and give healthcare workers the necessary training.

4. The unintended consequences of artificial intelligence in paediatric radiology. Pediatr Radiol 2024; 54:585-593. [PMID: 37665368] [DOI: 10.1007/s00247-023-05746-y]
Abstract
Over the past decade, there has been a dramatic rise in interest in the application of artificial intelligence (AI) in radiology. Originally only 'narrow' AI tasks were possible; however, with the increasing availability of data, coupled with ready access to powerful computer processing capabilities, we are becoming more able to generate complex and nuanced prediction models and elaborate solutions for healthcare. Nevertheless, these AI models are not without their failings, and sometimes the intended use for these solutions may not lead to predictable impacts for patients, society, or those working within the healthcare profession. In this article, we provide an overview of the latest opinions regarding AI ethics, bias, limitations, challenges, and considerations that we should all contemplate in this exciting and expanding field, with special attention to how this applies to the unique aspects of a paediatric population. By embracing AI technology and fostering a multidisciplinary approach, it is hoped that we can harness the power AI brings whilst minimising harm and ensuring a beneficial impact on radiology practice.

5. Brain tumor segmentation using synthetic MR images - A comparison of GANs and diffusion models. Sci Data 2024; 11:259. [PMID: 38424097] [PMCID: PMC10904731] [DOI: 10.1038/s41597-024-03073-x]
Abstract
Large annotated datasets are required for training deep learning models, but in medical imaging data sharing is often complicated due to ethics, anonymization, and data protection legislation. Generative AI models, such as generative adversarial networks (GANs) and diffusion models, can today produce very realistic synthetic images and can potentially facilitate data sharing. However, in order to share synthetic medical images it must first be demonstrated that they can be used for training different networks with acceptable performance. Here, we therefore comprehensively evaluate four GANs (progressive GAN, StyleGAN 1-3) and a diffusion model for the task of brain tumor segmentation (using two segmentation networks, U-Net and a Swin transformer). Our results show that segmentation networks trained on synthetic images reach Dice scores that are 80%-90% of the Dice scores obtained when training with real images, but that memorization of the training images can be a problem for diffusion models if the original dataset is too small. Our conclusion is that sharing synthetic medical images is a viable alternative to sharing real images, but that further work is required. The trained generative models and the generated synthetic images are shared on the AIDA data hub.
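The Dice comparison reported above can be made concrete: the Dice score measures voxel-wise overlap between a predicted and a reference segmentation, 2|A ∩ B| / (|A| + |B|). A minimal illustrative sketch (not the paper's code; the function name and toy masks are invented for illustration):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy 1D "masks": 4 predicted voxels, 4 true voxels, 3 overlapping
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(dice_score(pred, truth))  # 2*3 / (4+4) = 0.75
```

A synthetic-data Dice of, say, 0.68 against a real-data Dice of 0.80 would correspond to the 80%-90% relative performance range the abstract describes.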

6. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140] [DOI: 10.1111/1754-9485.13612]
Abstract
Abstract identical to entry 1 (simultaneous multi-journal publication).

7. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024:S1546-1440(23)01020-7. [PMID: 38276923] [DOI: 10.1016/j.jacr.2023.12.005]
Abstract
Abstract identical to entry 1 (simultaneous multi-journal publication).

8. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898] [PMCID: PMC10800328] [DOI: 10.1186/s13244-023-01541-3]
Abstract
Abstract identical to entry 1 (simultaneous multi-journal publication). Key points:
• The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety.
• Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance.
• AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.

9. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899] [PMCID: PMC10831521] [DOI: 10.1148/ryai.230513]
Abstract
Abstract identical to entry 1. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. © The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.

10. Specific measures for data-intensive health research without consent: a systematic review of soft law instruments and academic literature. Eur J Hum Genet 2024; 32:21-30. [PMID: 37848609] [PMCID: PMC10772063] [DOI: 10.1038/s41431-023-01471-0]
Abstract
It is a common misunderstanding of current European data protection law that when consent is not used as the lawful basis, the processing of personal data is prohibited. Article 9(2)(j) of the European General Data Protection Regulation (GDPR) permits Member States to establish a legal basis in national law that allows the processing of personal data for scientific research purposes without consent. However, the European legislator has formulated this "research exemption" as an opening clause, leaving the GDPR unspecific as to exactly which measures are required to comply with it. This may have significant implications for both the protection of personal data and the advancement of data-intensive health research. We performed a systematic review of relevant soft law instruments and academic literature to identify the measures mentioned in those documents. Our analysis identified four overarching themes of suggested measures: organizational measures; technical measures; oversight and review mechanisms; and public engagement and participation. Some of the suggested measures do not substantially contribute to clarifying the GDPR's "suitable and specific measures" requirement because they remain vague or broad in nature and encompass all types of data processing. However, the themes of oversight and review mechanisms and of public engagement and participation provide valuable insights that can be put into practice. Nevertheless, further clarification of the measures and safeguards that should be installed when invoking the research exemption remains necessary.

11. Organizational Factors in Clinical Data Sharing for Artificial Intelligence in Health Care. JAMA Netw Open 2023; 6:e2348422. [PMID: 38113040] [PMCID: PMC10731479] [DOI: 10.1001/jamanetworkopen.2023.48422]
Abstract
Importance: Limited sharing of data sets that accurately represent disease and patient diversity limits the generalizability of artificial intelligence (AI) algorithms in health care.
Objective: To explore the factors associated with organizational motivation to share health data for AI development.
Design, Setting, and Participants: This qualitative study investigated organizational readiness for sharing health data across the academic, governmental, nonprofit, and private sectors. Using a multiple case studies approach, 27 semistructured interviews were conducted with leaders in data-sharing roles from August 29, 2022, to January 9, 2023. The interviews were conducted in English using a video conferencing platform. Using a purposive and nonprobabilistic sampling strategy, 78 individuals across 52 unique organizations were identified; of these, 35 participants were enrolled. Participant recruitment concluded after 27 interviews, as theoretical saturation was reached and no additional themes emerged.
Main Outcomes and Measures: Concepts defining organizational readiness for data sharing, and the association between data-sharing factors and organizational behavior, were mapped through iterative qualitative analysis to establish a framework defining organizational readiness for sharing clinical data for AI development.
Results: Interviews included 27 leaders from 18 organizations (academia: 10; government: 7; nonprofit: 8; private: 2). Organizational readiness for data sharing centered around 2 main constructs: motivation and capabilities. Motivation related to the alignment of an organization's values with data-sharing priorities and was associated with its engagement in data-sharing efforts. However, organizational motivation could be modulated by extrinsic incentives for financial or reputational gains. Organizational capabilities comprised infrastructure, people, expertise, and access to data. Cross-sector collaboration was a key strategy to mitigate barriers to accessing health data.
Conclusions and Relevance: This qualitative study identified sector-specific factors that may affect the data-sharing behaviors of health organizations. External incentives may bolster cross-sector collaborations by helping overcome barriers to accessing health data for AI development. The findings suggest that tailored incentives may boost organizational motivation and facilitate a sustainable flow of health data for AI development.

12. Ethical Considerations for Artificial Intelligence in Medical Imaging: Data Collection, Development, and Evaluation. J Nucl Med 2023; 64:1848-1854. [PMID: 37827839] [PMCID: PMC10690124] [DOI: 10.2967/jnumed.123.266080]
Abstract
The development of artificial intelligence (AI) within nuclear imaging involves several ethically fraught components at different stages of the machine learning pipeline, including during data collection, model training and validation, and clinical use. Drawing on the traditional principles of medical and research ethics, and highlighting the need to ensure health justice, the AI task force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks: privacy of data subjects, data quality and model efficacy, fairness toward marginalized populations, and transparency of clinical performance. We provide preliminary recommendations to developers of AI-driven medical devices for mitigating the impact of these risks on patients and populations.

13. Cybersecurity considerations for radiology departments involved with artificial intelligence. Eur Radiol 2023; 33:8833-8841. [PMID: 37418025] [PMCID: PMC10667413] [DOI: 10.1007/s00330-023-09860-1]
Abstract
Radiology artificial intelligence (AI) projects involve the integration of numerous medical devices, wireless technologies, data warehouses, and social networks. While cybersecurity threats are not new to healthcare, their prevalence has increased with the rise of AI research for applications in radiology, making them one of the major healthcare risks of 2021. Radiologists have extensive experience with the interpretation of medical imaging data, but they may not have the required level of awareness or training related to AI-specific cybersecurity concerns. Healthcare providers and device manufacturers can learn from other industry sectors that have already taken steps to improve their cybersecurity systems. This review aims to introduce cybersecurity concepts as they relate to medical imaging and to provide background information on general and healthcare-specific cybersecurity challenges. We discuss approaches to enhancing the level and effectiveness of security through detection and prevention techniques, as well as ways that technology can improve security while mitigating risks. We first review general cybersecurity concepts and regulatory issues before examining these topics in the context of radiology AI, with a specific focus on data, training, implementation, and auditability. Finally, we suggest potential risk mitigation strategies. By reading this review, healthcare providers, researchers, and device developers can gain a better understanding of the potential risks associated with radiology AI projects, as well as strategies to improve cybersecurity and reduce those risks.
CLINICAL RELEVANCE STATEMENT: This review can aid radiologists' and related professionals' understanding of the potential cybersecurity risks associated with radiology AI projects, as well as strategies to improve security.
KEY POINTS:
• Embarking on a radiology artificial intelligence (AI) project is complex and not without risk, especially as cybersecurity threats have become more abundant in the healthcare industry.
• Fortunately, healthcare providers and device manufacturers can take inspiration from other industry sectors that are leading the way in the field.
• Herein we provide an introduction to cybersecurity as it pertains to radiology and a background to both general and healthcare-specific cybersecurity challenges; we outline general approaches to improving security through both detection and preventive techniques, and instances where technology can increase security while mitigating risks.

14. Perceptions of Data Set Experts on Important Characteristics of Health Data Sets Ready for Machine Learning: A Qualitative Study. JAMA Netw Open 2023; 6:e2345892. [PMID: 38039004] [PMCID: PMC10692863] [DOI: 10.1001/jamanetworkopen.2023.45892]
Abstract
Importance: The lack of data quality frameworks to guide the development of artificial intelligence (AI)-ready data sets limits their usefulness for machine learning (ML) research in health care and hinders the diagnostic excellence of developed clinical AI applications for patient care.
Objective: To discern what constitutes high-quality and useful data sets for health and biomedical ML research purposes, according to subject matter experts.
Design, Setting, and Participants: This qualitative study interviewed data set experts, particularly creators and ML researchers. Semistructured interviews were conducted in English and remotely through a secure video conferencing platform between August 23, 2022, and January 5, 2023. A total of 93 experts were invited to participate; 20 were enrolled and interviewed. Using purposive sampling, experts were affiliated with a diverse representation of 16 health data sets/databases across organizational sectors. Content analysis was used to evaluate survey information, and thematic analysis was used to analyze interview data.
Main Outcomes and Measures: Data set experts' perceptions of what makes data sets AI ready.
Results: Participants included 20 data set experts (11 [55%] men; mean [SD] age, 42 [11] years), all of whom were health data set creators; 18 of the 20 were also ML researchers. Themes (3 main and 11 subthemes) were identified and integrated into an AI-readiness framework to show their association within the health data ecosystem. Participants partially determined the AI readiness of data sets using priority appraisal elements of accuracy, completeness, consistency, and fitness. Ethical acquisition and societal impact emerged as appraisal considerations that have not been described to date in prior data quality frameworks. Factors that drive the creation of high-quality health data sets and mitigate risks associated with data reuse in ML research were also relevant to AI readiness. The state of data availability, data quality standards, documentation, team science, and incentivization were associated with elements of AI readiness and the overall perception of data set usefulness.
Conclusions and Relevance: In this qualitative study of data set experts, participants contributed to the development of a grounded framework for AI data set quality. Data set AI readiness required the concerted appraisal of many elements and the balancing of transparency and ethical reflection against pragmatic constraints. The movement toward more reliable, relevant, and ethical AI and ML applications for patient care will inevitably require strategic updates to data set creation practices.

15. Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology. AJNR Am J Neuroradiol 2023; 44:1242-1248. [PMID: 37652578] [PMCID: PMC10631523] [DOI: 10.3174/ajnr.a7963]
Abstract
In this review, concepts of algorithmic bias and fairness are defined qualitatively and mathematically. Illustrative examples are given of what can go wrong when unintended bias or unfairness occurs in algorithm development. The importance of explainability, accountability, and transparency with respect to artificial intelligence algorithm development and clinical deployment is discussed, grounded in the concept of "primum non nocere" (first, do no harm). Steps to mitigate unfairness and bias in task definition, data collection, model definition, training, testing, deployment, and feedback are provided. Discussions on the implementation of fairness criteria that maximize benefit and minimize unfairness and harm to neuroradiology patients are also provided, including suggestions for neuroradiologists to consider as artificial intelligence algorithms gain acceptance in neuroradiology practice and become incorporated into routine clinical workflow.

16. Artificial Intelligence and liver: Opportunities and barriers. Dig Liver Dis 2023; 55:1455-1461. [PMID: 37718227] [DOI: 10.1016/j.dld.2023.08.048]
Abstract
Artificial Intelligence (AI) has recently been shown to be an excellent tool for the study of the liver; however, many obstacles still have to be overcome for the digitalization of real-world hepatology. The authors present an overview of the current state of the art on the use of innovative technologies in different areas (big data, translational hepatology, imaging, and the transplant setting). In clinical practice, physicians must integrate a vast array of data modalities (medical history, clinical data, laboratory tests, imaging, and pathology slides) to reach a diagnostic or therapeutic decision. Unfortunately, machine learning and deep learning are still far from truly supporting clinicians in real life; the accuracy of any technological support has no value in medicine without clinician oversight. To make better use of new technologies, it is essential to improve clinicians' knowledge about them. To this end, the authors propose that collaborative networks for multidisciplinary approaches will speed the implementation of AI systems for developing disease-customized, AI-powered clinical decision support tools. The authors also discuss ethical, educational, and legal challenges that must be overcome to build robust bridges and deploy potentially effective AI in real-world clinical settings.
|
17
|
The Future of AI and Informatics in Radiology: 10 Predictions. Radiology 2023; 309:e231114. [PMID: 37874234] [PMCID: PMC10623186] [DOI: 10.1148/radiol.231114]
|
18
|
Ethical Considerations for Artificial Intelligence in Medical Imaging: Deployment and Governance. J Nucl Med 2023; 64:1509-1515. [PMID: 37620051] [DOI: 10.2967/jnumed.123.266110]
Abstract
The deployment of artificial intelligence (AI) has the potential to make nuclear medicine and medical imaging faster, cheaper, and both more effective and more accessible. This is possible, however, only if clinicians and patients feel that these AI medical devices (AIMDs) are trustworthy. Highlighting the need to ensure health justice by fairly distributing benefits and burdens while respecting individual patients' rights, the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks that arise during the deployment of AIMD: autonomy of patients and clinicians, transparency of clinical performance and limitations, fairness toward marginalized populations, and accountability of physicians and developers. We provide preliminary recommendations for governing these ethical risks to realize the promise of AIMD for patients and populations.
|
19
|
Implications of Pediatric Artificial Intelligence Challenges for Artificial Intelligence Education and Curriculum Development. J Am Coll Radiol 2023; 20:724-729. [PMID: 37352995] [DOI: 10.1016/j.jacr.2023.04.013]
Abstract
Several radiology artificial intelligence (AI) courses are offered by a variety of institutions and educators. The major radiology societies have developed AI curricula focused on basic AI principles and practices. However, a specific AI curriculum focused on pediatric radiology is needed to offer targeted education material on AI model development and performance evaluation. There are inherent differences between pediatric and adult practice patterns, which may hinder the application of adult AI models in pediatric cohorts. Such differences include the different imaging modality utilization, imaging acquisition parameters, lower radiation doses, the rapid growth of children and changes in their body composition, and the presence of unique pathologies and diseases, which differ in prevalence from adults. Thus, to enhance radiologists' knowledge of the applications of AI models in pediatric patients, curricula should be structured keeping in mind the unique pediatric setting and its challenges, along with methods to overcome these challenges, and pediatric-specific data governance and ethical considerations. In this report, the authors highlight the salient aspects of pediatric radiology that are necessary for AI education in the pediatric setting, including the challenges for research investigation and clinical implementation.
|
20
|
Facial Anonymization and Privacy Concerns in Total-Body PET/CT. J Nucl Med 2023; 64:1304-1309. [PMID: 37268426] [PMCID: PMC10394314] [DOI: 10.2967/jnumed.122.265280]
Abstract
Total-body PET/CT images can be rendered to produce images of a subject's face and body. In response to privacy and identifiability concerns when sharing data, we have developed and validated a workflow that obscures (defaces) a subject's face in 3-dimensional volumetric data. Methods: To validate our method, we measured facial identifiability before and after defacing images from 30 healthy subjects who were imaged with both [18F]FDG PET and CT at either 3 or 6 time points. Briefly, facial embeddings were calculated using Google's FaceNet, and an analysis of clustering was used to estimate identifiability. Results: Faces rendered from CT images were correctly matched to CT scans at other time points at a rate of 93%, which decreased to 6% after defacing. Faces rendered from PET images were correctly matched to PET images at other time points at a maximum rate of 64% and to CT images at a maximum rate of 50%, both of which decreased to 7% after defacing. We further demonstrated that defaced CT images can be used for attenuation correction during PET reconstruction, introducing a maximum bias of -3.3% in regions of the cerebral cortex nearest the face. Conclusion: We believe that the proposed method provides a baseline of anonymity and discretion when sharing image data online or between institutions and will help to facilitate collaboration and future regulatory compliance.
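The identifiability analysis described above can be sketched in a few lines. This is an illustration only, not the authors' code: a nearest-neighbour match on cosine similarity of per-subject face embeddings stands in for the FaceNet-plus-clustering analysis, and the toy embeddings are synthetic.

```python
import numpy as np

def match_rate(embeddings_a, embeddings_b):
    """Fraction of subjects in set A whose nearest neighbour (by cosine
    similarity) in set B is the same subject. Row i of both arrays is
    assumed to belong to subject i."""
    a = embeddings_a / np.linalg.norm(embeddings_a, axis=1, keepdims=True)
    b = embeddings_b / np.linalg.norm(embeddings_b, axis=1, keepdims=True)
    sims = a @ b.T                      # pairwise cosine similarities
    matches = sims.argmax(axis=1)       # nearest neighbour in set B
    return float((matches == np.arange(len(a))).mean())

# Toy data: before defacing, embeddings of the same subject at two
# time points differ only by small noise...
rng = np.random.default_rng(0)
subjects = rng.normal(size=(30, 128))
scan_t1 = subjects + 0.05 * rng.normal(size=subjects.shape)
scan_t2 = subjects + 0.05 * rng.normal(size=subjects.shape)
# ...while successful defacing should reduce embeddings to noise.
defaced_t2 = rng.normal(size=subjects.shape)

print(match_rate(scan_t1, scan_t2))     # high before defacing
print(match_rate(scan_t1, defaced_t2))  # near chance (~1/30) after
```

The same match-rate drop (93% to 6% for CT renderings in the study) is what a defacing workflow aims to demonstrate.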
|
21
|
Federated benchmarking of medical artificial intelligence with MedPerf. Nat Mach Intell 2023; 5:799-810. [PMID: 38706981] [PMCID: PMC11068064] [DOI: 10.1038/s42256-023-00652-2]
Abstract
Medical artificial intelligence (AI) has tremendous potential to advance healthcare by supporting and contributing to the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving both healthcare provider and patient experience. Unlocking this potential requires systematic, quantitative evaluation of the performance of medical AI models on large-scale, heterogeneous data capturing diverse patient populations. Here, to meet this need, we introduce MedPerf, an open platform for benchmarking AI models in the medical domain. MedPerf focuses on enabling federated evaluation of AI models, by securely distributing them to different facilities, such as healthcare organizations. This process of bringing the model to the data empowers each facility to assess and verify the performance of AI models in an efficient and human-supervised process, while prioritizing privacy. We describe the current challenges healthcare and AI communities face, the need for an open platform, the design philosophy of MedPerf, its current implementation status and real-world deployment, our roadmap and, importantly, the use of MedPerf with multiple international institutions within cloud-based technology and on-premises scenarios. Finally, we welcome new contributions by researchers and organizations to further strengthen MedPerf as an open benchmarking platform.
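The core idea of federated evaluation, bringing the model to the data so that only aggregate metrics leave each facility, can be sketched as follows. This is a minimal illustration of the concept, not MedPerf's API; the function names and the toy model are hypothetical.

```python
import numpy as np

def site_evaluate(model, X, y):
    """Runs at the data-holding facility: the model is brought to the
    private data, and only an aggregate metric leaves the site."""
    preds = model(X)
    return {"n": len(y), "accuracy": float((preds == y).mean())}

def pooled_accuracy(reports):
    """Benchmark server combines per-site metrics, never raw data."""
    total = sum(r["n"] for r in reports)
    return sum(r["accuracy"] * r["n"] for r in reports) / total

# Toy model and two hypothetical facilities' private test sets.
model = lambda X: (X.sum(axis=1) > 0).astype(int)
rng = np.random.default_rng(3)
sites = []
for n in (40, 60):
    X = rng.normal(size=(n, 4))
    y = (X.sum(axis=1) > 0).astype(int)   # noiseless labels for the toy
    sites.append((X, y))

reports = [site_evaluate(model, X, y) for X, y in sites]
print(pooled_accuracy(reports))  # 1.0 on this noiseless toy example
```

In a real deployment each `site_evaluate` call would run inside the facility's infrastructure under human supervision, which is the privacy property the platform is built around.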
|
22
|
Ethical Considerations for Artificial Intelligence in Interventional Radiology: Balancing Innovation and Patient Care. Semin Intervent Radiol 2023; 40:323-326. [PMID: 37484438] [PMCID: PMC10359128] [DOI: 10.1055/s-0043-1769905]
|
23
|
Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Front Radiol 2023; 3:1149461. [PMID: 37492387] [PMCID: PMC10365008] [DOI: 10.3389/fradi.2023.1149461]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, as well as responsibility and liability that might potentially arise. In this manuscript, we will first provide a brief overview of AI methods used in neuroradiology and segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
|
24
|
Preparing for the Artificial Intelligence Revolution in Nuclear Cardiology. Nucl Med Mol Imaging 2023; 57:51-60. [PMID: 36998588] [PMCID: PMC10043081] [DOI: 10.1007/s13139-021-00733-3]
Abstract
A major opportunity in nuclear cardiology lies in the many significant artificial intelligence (AI) applications that have recently been reported. These developments include using deep learning (DL) to reduce the injected dose and acquisition time needed for perfusion acquisitions, aided by DL improvements in image reconstruction and filtering; DL-based SPECT attenuation correction without the need for transmission images; DL and machine learning (ML) feature extraction to define myocardial left ventricular (LV) borders for functional measurements and to improve detection of the LV valve plane; and AI, ML, and DL implementations for MPI diagnosis, prognosis, and structured reporting. Although a few have, most of these applications have yet to reach widespread commercial distribution owing to how recently they were developed, most being reported in 2020. We must be prepared, both technically and socio-economically, to fully benefit from these and a tsunami of other AI applications that are coming.
|
25
|
Collaborative Privacy-preserving Approaches for Distributed Deep Learning Using Multi-Institutional Data. Radiographics 2023; 43:e220107. [PMID: 36862082] [PMCID: PMC10091220] [DOI: 10.1148/rg.220107]
Abstract
Deep learning (DL) algorithms have shown remarkable potential in automating various tasks in medical imaging and radiologic reporting. However, models trained on low quantities of data or only using data from a single institution often are not generalizable to other institutions, which may have different patient demographics or data acquisition characteristics. Therefore, training DL algorithms using data from multiple institutions is crucial to improving the robustness and generalizability of clinically useful DL models. In the context of medical data, simply pooling data from each institution to a central location to train a model poses several issues such as increased risk to patient privacy, increased costs for data storage and transfer, and regulatory challenges. These challenges of centrally hosting data have motivated the development of distributed machine learning techniques and frameworks for collaborative learning that facilitate the training of DL models without the need to explicitly share private medical data. The authors describe several popular methods for collaborative training and review the main considerations for deploying these models. They also highlight publicly available software frameworks for federated learning and showcase several real-world examples of collaborative learning. The authors conclude by discussing some key challenges and future research directions for distributed DL. They aim to introduce clinicians to the benefits, limitations, and risks of using distributed DL for the development of medical artificial intelligence algorithms. ©RSNA, 2023 Quiz questions for this article are available in the supplemental material.
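One widely used collaborative-training scheme in this family is federated averaging (FedAvg): each institution trains locally on its private data and only model weights are shared and averaged. The sketch below is illustrative, not any specific framework's API; a linear least-squares model with plain gradient descent stands in for a DL model, and sites are assumed to weight the average by sample count.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: gradient descent on a linear
    least-squares model, using only that site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step of FedAvg: average site models weighted by local
    sample counts. Raw data never leaves the sites."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Toy example: two institutions whose data share one underlying model.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
sites = []
for n in (80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(50):  # communication rounds
    local_models = [local_update(w, X, y) for X, y in sites]
    w = federated_average(local_models, [len(y) for _, y in sites])

print(w)  # approaches [2, -1] without ever pooling the data
```

The design choice worth noting is that only `local_models` cross institutional boundaries; this is what sidesteps the privacy, storage, and regulatory issues of central pooling that the authors describe.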
|
26
|
Informing a position statement on the use of artificial intelligence in dermatology in Australia. Australas J Dermatol 2023; 64:e11-e20. [PMID: 36380357] [DOI: 10.1111/ajd.13946]
Abstract
Artificial Intelligence (AI) is the ability for computers to simulate human intelligence. In dermatology, there is substantial interest in using AI to identify skin lesions from images. Due to increasing research and interest in the use of AI, the Australasian College of Dermatologists has developed a position statement to inform its members of appropriate use of AI. This article presents the ACD Position Statement on the use of AI in dermatology, and provides explanatory information that was used to inform the development of this statement.
|
27
|
AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023; 50:e1-e24. [PMID: 36565447] [DOI: 10.1002/mp.16188]
Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
|
28
|
Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak 2023; 23:7. [PMID: 36639799] [PMCID: PMC9840286] [DOI: 10.1186/s12911-023-02103-9]
Abstract
BACKGROUND The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also seen as ethical issues that affect trustworthiness in medical AI and need to be managed through identification, prognosis and monitoring. METHODS We adopted a multidisciplinary approach and summarized five subjects that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution, and discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. The ethical framework of ethical values-ethical principles-ethical norms is used to propose corresponding ethical governance countermeasures for trustworthy medical AI from the ethical, legal, and regulatory aspects. RESULTS Medical data are primarily unstructured, lacking uniform and standardized annotation, and data quality will directly affect the quality of medical AI algorithm models. Algorithmic bias can affect AI clinical predictions and exacerbate health disparities. The opacity of algorithms affects patients' and doctors' trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practices may threaten doctors' and patients' autonomy and dignity. When accidents occur with medical AI, responsibility attribution is not clear. All these factors affect people's trust in medical AI. CONCLUSIONS In order to make medical AI trustworthy, at the ethical level, the ethical value orientation of promoting human health should first and foremost be considered in the top-level design. At the legal level, current medical AI does not have moral status and humans remain the duty bearers. At the regulatory level, strengthening data quality management, improving algorithm transparency and traceability to reduce algorithmic bias, and regulating and reviewing the whole process of the AI industry to control risks are proposed. It is also necessary to encourage multiple parties to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication.
|
29
|
Institutional Strategies to Maintain and Grow Imaging Research During the COVID-19 Pandemic. Acad Radiol 2023; 30:631-639. [PMID: 36764883] [PMCID: PMC9816088] [DOI: 10.1016/j.acra.2022.12.045]
Abstract
Understanding imaging research experiences, challenges, and strategies for academic radiology departments during and after COVID-19 is critical to prepare for future disruptive events. We summarize key insights and programmatic initiatives at major academic hospitals across the world, based on literature review and meetings of the Radiological Society of North America Vice Chairs of Research (RSNA VCR) group. Through expert discussion and case studies, we provide suggested guidelines to maintain and grow radiology research in the postpandemic era.
|
30
|
SAM-X: sorting algorithm for musculoskeletal x-ray radiography. Eur Radiol 2023; 33:1537-1544. [PMID: 36307553] [PMCID: PMC9935683] [DOI: 10.1007/s00330-022-09184-6]
Abstract
OBJECTIVE To develop a two-phase deep learning sorting algorithm that, after X-ray image acquisition, organizes large musculoskeletal image datasets according to their anatomical entity. METHODS In total, 42,608 unstructured and pseudonymized radiographs were retrieved from the PACS of a musculoskeletal tumor center. In phase 1, imaging data were sorted into 1000 clusters by a self-supervised model. A human-in-the-loop radiologist assigned weak, semantic labels to all clusters, and clusters with the same label were merged. Three hundred thirty-two non-musculoskeletal clusters were discarded. In phase 2, the initial model was modified by "injecting" the identified labels into the self-supervised model to train a classifier. To ensure statistical robustness, data splitting and cross-validation were applied. The hold-out test set consisted of 50% external data. To gain insight into the model's predictions, Grad-CAMs were calculated. RESULTS The self-supervised clustering resulted in a high normalized mutual information of 0.930. The expert radiologist identified 28 musculoskeletal clusters. The modified model achieved a classification accuracy of 96.2% and 96.6% for validation and hold-out test data, respectively, when predicting the top class. When considering the top two predicted class labels, accuracies of 99.7% and 99.6% were achieved. Grad-CAMs as well as final cluster results underlined the robustness of the proposed method by showing that it focused on image regions similar to those a human would have considered when categorizing images. CONCLUSION For efficient dataset building, we propose an accurate deep learning sorting algorithm for classifying radiographs according to their anatomical entity in the assessment of musculoskeletal diseases. KEY POINTS • Classification of large radiograph datasets according to their anatomical entity. • Paramount importance of structuring vast amounts of retrospective data for modern deep learning applications. • Optimization of the radiological workflow and increased efficiency, as well as a decrease in time-consuming tasks for radiologists, through deep learning.
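The two-phase structure above (unsupervised clustering, human-assigned cluster labels, then a classifier) can be sketched on toy data. This is illustrative only, not the SAM-X pipeline: plain k-means stands in for the self-supervised model, a simple threshold stands in for the radiologist's labeling step, and nearest-centroid lookup stands in for the retrained classifier.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Phase 1 (illustrative): unsupervised clustering of image features.
    The paper clusters self-supervised deep features into 1000 clusters;
    plain k-means on toy features stands in for that step here."""
    # Simple deterministic init for the sketch: evenly spaced samples.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers, assign

def classify(X, centers, cluster_labels):
    """Phase 2 (illustrative): once a human-in-the-loop has mapped each
    cluster to a weak semantic label, new images inherit the label of
    their nearest cluster center."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return [cluster_labels[j] for j in d.argmin(1)]

# Toy features for two well-separated anatomical entities.
rng = np.random.default_rng(2)
hand = rng.normal(loc=0.0, size=(50, 8))
knee = rng.normal(loc=5.0, size=(50, 8))
X = np.vstack([hand, knee])

centers, assign = kmeans(X, k=2)
# Stand-in for the radiologist naming each cluster by inspection.
cluster_labels = {j: ("hand" if X[assign == j][:, 0].mean() < 2.5 else "knee")
                  for j in range(2)}

preds = classify(np.vstack([hand[:5], knee[:5]]), centers, cluster_labels)
print(preds)
```

The appeal of this design is that the expensive human effort scales with the number of clusters (28 in the study), not with the 42,608 images.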
|
31
|
A Comprehensive Survey on Federated Learning Techniques for Healthcare Informatics. Comput Intell Neurosci 2023; 2023:8393990. [PMID: 36909974] [PMCID: PMC9995203] [DOI: 10.1155/2023/8393990]
Abstract
Healthcare is widely regarded as crucial to promoting the physical and mental health and well-being of people around the world. The amount of data generated by healthcare systems is enormous, making it challenging to manage. Many machine learning (ML) approaches have been implemented to develop dependable and robust solutions for handling these data. However, privacy concerns prevent ML from fully utilizing the data, particularly in the case of medical data: without access to precise clinical data, applying ML is challenging and may not yield the desired results. Federated learning (FL), a recent development in ML in which computation is offloaded to the source of the data, appears to be a promising solution to this problem. In this study, we present a detailed survey of applications of FL for healthcare informatics. We begin by discussing the need for FL in the healthcare domain, followed by a review of recent review papers. We focus on the fundamentals of FL and the major motivations behind FL for healthcare applications. We then present applications of FL, along with the recent state of the art, across several verticals of healthcare. Lessons learned, open issues, and challenges that remain unsolved are also highlighted, followed by future directions for prospective researchers in this domain.
|
32
|
Abstract
Numerous arguments have been advanced for broadly sharing de-identified, participant-level clinical trials data, and trial sponsors and journals are increasingly requiring it. However, data sharing in pragmatic clinical trials presents ethical challenges related to the use of waivers or alterations of informed consent for some pragmatic clinical trials and corresponding limitations of informed consent to guide sharing decisions; the potential for data sharing in pragmatic clinical trials to present risks not only for individual patient-subjects, but also for health systems and the clinicians within them; sharing of data from electronic health records instead of data newly collected for research purposes; and researchers' limited capacity to control sensitive data within an electronic health record and potential implications of such limits for meeting obligations inherent to Certificates of Confidentiality. These challenges raise questions about the extent to which traditional research ethics governance structures are capable of guiding decisions about pragmatic clinical trial data sharing. This article identifies and examines these ethical challenges for pragmatic clinical trial data sharing. We suggest several areas for future empirical scholarship, including the need to identify patient and public attitudes regarding pragmatic clinical trial data sharing as well as to assess the demand for pragmatic clinical trial data and the correspondingly likely benefit of such sharing. Further conceptual work is also needed to explore how requirements to respect patient-subjects about whom data are shared in the context of pragmatic clinical trials should be understood, particularly in the absence of informed consent for initial research activities, and the appropriate balance between promoting the generation of socially valuable knowledge and respecting autonomy.
|
33
|
Developing medical imaging AI for emerging infectious diseases. Nat Commun 2022; 13:7060. [PMID: 36400764] [PMCID: PMC9672573] [DOI: 10.1038/s41467-022-34234-4]
Abstract
Very few of the COVID-19 ML models were fit for deployment in real-world settings. In this Comment, Huang et al. discuss the main steps required to develop clinically useful models in the context of an emerging infectious disease.
|
34
|
Value assessment of artificial intelligence in medical imaging: a scoping review. BMC Med Imaging 2022; 22:187. [PMID: 36316665] [PMCID: PMC9620604] [DOI: 10.1186/s12880-022-00918-y]
Abstract
BACKGROUND Artificial intelligence (AI) is seen as one of the major disrupting forces in the future healthcare system. However, the assessment of the value of these new technologies is still unclear, and no agreed international health technology assessment-based guideline exists. This study provides an overview of the available literature on the value assessment of AI in the field of medical imaging. METHODS We performed a systematic scoping review of studies published between January 2016 and September 2020 using 10 databases (Medline, Scopus, ProQuest, Google Scholar, and six related databases of grey literature). Information about the context (country, clinical area, and type of study) and mentioned domains with specific outcomes and items was extracted. An existing domain classification, from a European assessment framework, was used as a point of departure; extracted data were grouped into domains, and content analysis was performed covering predetermined themes. RESULTS Seventy-nine studies were included out of 5890 identified articles. An additional seven studies were identified by searching reference lists, and the analysis was performed on the 86 included studies. Eleven domains were identified: (1) health problem and current use of technology, (2) technology aspects, (3) safety assessment, (4) clinical effectiveness, (5) economics, (6) ethical analysis, (7) organisational aspects, (8) patients and social aspects, (9) legal aspects, (10) development of AI algorithm, performance metrics and validation, and (11) other aspects. The frequency with which a domain was mentioned varied from 20% to 78% across the included papers. Only 15 of 86 studies were actual assessments of AI technologies. The majority of data were statements from reviews or papers voicing future needs or challenges of AI research, i.e. not actual outcomes of evaluations. CONCLUSIONS This review of value assessment of AI in medical imaging yielded 86 studies covering 11 identified domains. The domain classification based on the European assessment framework proved useful, and the current analysis added one new domain. The included studies covered a broad range of essential domains for assessing AI technologies, highlighting the importance of domains related to legal and ethical aspects.
|
35
|
Data governance functions to support responsible data stewardship in pediatric radiology research studies using artificial intelligence. Pediatr Radiol 2022; 52:2111-2119. [PMID: 35790559] [DOI: 10.1007/s00247-022-05427-2]
Abstract
The integration of human and machine intelligence promises to profoundly change the practice of medicine. The rapidly increasing adoption of artificial intelligence (AI) solutions highlights its potential to streamline physician work and optimize clinical decision-making, also in the field of pediatric radiology. Large imaging databases are necessary for training, validating and testing these algorithms. To better promote data accessibility in multi-institutional AI-enabled radiologic research, these databases centralize the large volumes of data required to effect accurate models and outcome predictions. However, such undertakings must consider the sensitivity of patient information and therefore utilize requisite data governance measures to safeguard data privacy and security, to recognize and mitigate the effects of bias and to promote ethical use. In this article we define data stewardship and data governance, review their key considerations and applicability to radiologic research in the pediatric context, and consider the associated best practices along with the ramifications of poorly executed data governance. We summarize several adaptable data governance frameworks and describe strategies for their implementation in the form of distributed and centralized approaches to data management.
Collapse
|
36
|
Forging Connections in Latin America to Advance AI in Radiology. Radiol Artif Intell 2022; 4:e220125. [PMID: 36204535 PMCID: PMC9530756 DOI: 10.1148/ryai.220125] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Revised: 08/06/2022] [Accepted: 08/12/2022] [Indexed: 06/16/2023]
Abstract
The 1° Encontro Latino-Americano de IA em Saúde (1st Latin American Meeting on AI in Health) was held during the 2022 Jornada Paulista de Radiologia, the annual radiology meeting in the state of São Paulo. The event was created to foster discussion among Latin American countries about the complexity, challenges, and opportunities in developing and using artificial intelligence (AI) in those countries. Technological improvements in AI have created high expectations in health care. AI is recognized increasingly as a game changer in clinical radiology. To counter the fear that AI would "take over" radiology, the program included activities to educate radiologists. The development of AI in Latin America is in its early days, and although there are some pioneer cases, many regions still lack world-class technological infrastructure and resources. Legislation, regulation, and public policies in data privacy and protection, digital health, and AI are recent advances in many countries. The meeting program was developed with a broad scope, with expertise from different countries, backgrounds, and specialties, with the objective of encompassing all levels of complexity (from basic concepts to advanced techniques), perspectives (clinical, technical, ethical, and business), and specialties (both informatics and data science experts and the usual radiology clinical groups). It was an opportunity to connect with peers from other countries and share lessons learned about AI in health care in different countries and contexts. Keywords: Informatics, Use of AI in Education, Impact of AI on Education, Social Implications © RSNA, 2022.
Collapse
|
37
|
Proceedings from the Society of Interventional Radiology Foundation Research Consensus Panel on Artificial Intelligence in Interventional Radiology: From Code to Bedside. J Vasc Interv Radiol 2022; 33:1113-1120. [PMID: 35871021 DOI: 10.1016/j.jvir.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 06/02/2022] [Accepted: 06/04/2022] [Indexed: 11/24/2022] Open
Abstract
Artificial intelligence (AI)-based technologies are the most rapidly growing field of innovation in healthcare, with the promise of substantial improvements in the delivery of patient care across all disciplines of medicine. Recent advances in imaging technology, along with the marked expansion of readily available advanced health information and data, offer a unique opportunity for interventional radiology (IR) to reinvent itself as a data-driven specialty. Additionally, the growth of AI-based applications in diagnostic imaging is expected to have downstream effects on all image-guidance modalities. Therefore, the Society of Interventional Radiology Foundation called upon 13 key opinion leaders in the field of IR to develop research priorities for clinical applications of AI in IR. The objectives of the assembled research consensus panel were to assess the availability and applicability of AI for IR, estimate current needs and clinical use cases, and assemble a list of research priorities for the development of AI in IR. Individual panel members proposed consensus statements, and all participants voted to rank them according to their overall impact on IR. The results identify the top priorities for the IR research community and provide organizing principles for innovative academic-industrial research collaborations that will leverage both clinical expertise and cutting-edge technology to benefit patient care in IR.
Collapse
|
38
|
Ethics of AI in Radiology: A Review of Ethical and Societal Implications. Front Big Data 2022; 5:850383. [PMID: 35910490 PMCID: PMC9329694 DOI: 10.3389/fdata.2022.850383] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 06/13/2022] [Indexed: 11/13/2022] Open
Abstract
Artificial intelligence (AI) is being applied in medicine to improve healthcare and advance health equity. The application of AI-based technologies in radiology is expected to improve diagnostic performance by increasing accuracy and simplifying personalized decision-making. While this technology has the potential to improve health services, many ethical and societal implications need to be carefully considered to avoid harmful consequences for individuals and groups, especially for the most vulnerable populations. Therefore, several questions are raised, including (1) what types of ethical issues are raised by the use of AI in medicine and biomedical research, and (2) how are these issues being tackled in radiology, especially in the case of breast cancer? To answer these questions, a systematic review of the academic literature was conducted. Searches were performed in five electronic databases to identify peer-reviewed articles published since 2017 on the topic of the ethics of AI in radiology. The review results show that the discourse has mainly addressed expectations and challenges associated with medical AI, and in particular bias and black box issues, and that various guiding principles have been suggested to ensure ethical AI. We found that several ethical and societal implications of AI use remain underexplored, and more attention needs to be paid to addressing potential discriminatory effects and injustices. We conclude with a critical reflection on these issues and the identified gaps in the discourse from a philosophical and STS perspective, underlining the need to integrate a social science perspective in AI developments in radiology in the future.
Collapse
|
39
|
[Ethics for Artificial Intelligence: Focus on the Use of Radiology Images]. JOURNAL OF THE KOREAN SOCIETY OF RADIOLOGY 2022; 83:759-770. [PMID: 36238915 PMCID: PMC9514581 DOI: 10.3348/jksr.2022.0036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 04/21/2022] [Accepted: 05/02/2022] [Indexed: 06/16/2023]
Abstract
The importance of ethics in research and in the use of artificial intelligence (AI) is increasingly recognized, not only in the field of healthcare but throughout society. This article intends to provide readers in Korea with practical points regarding the ethical issues of using radiological images for AI research, focusing on data security and privacy protection and on the right to data; it therefore refers to relevant domestic laws and government policies. Data security and privacy protection is a key ethical principle for AI, in which proper de-identification of data is crucial. Sharing healthcare data to develop AI in a way that minimizes business interests is another ethical point to be highlighted. The need for data sharing makes data security and privacy protection even more important, as sharing increases the risk of data breaches.
Collapse
|
40
|
Synthetic Medical Images for Robust, Privacy-Preserving Training of Artificial Intelligence. OPHTHALMOLOGY SCIENCE 2022; 2:100126. [PMID: 36249693 PMCID: PMC9560638 DOI: 10.1016/j.xops.2022.100126] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 02/01/2022] [Accepted: 02/07/2022] [Indexed: 02/06/2023]
Abstract
Purpose Developing robust artificial intelligence (AI) models for medical image analysis requires large quantities of diverse, well-chosen data that can prove challenging to collect because of privacy concerns, disease rarity, or diagnostic label quality. Collecting image-based datasets for retinopathy of prematurity (ROP), a potentially blinding disease, suffers from these challenges. Progressively growing generative adversarial networks (PGANs) may help, because they can synthesize highly realistic images that may increase both the size and diversity of medical datasets. Design Diagnostic validation study of convolutional neural networks (CNNs) for plus disease detection, a component of severe ROP, using synthetic data. Participants Five thousand eight hundred forty-two retinal fundus images (RFIs) collected from 963 preterm infants. Methods Retinal vessel maps (RVMs) were segmented from RFIs. PGANs were trained to synthesize RVMs with normal, pre-plus, or plus disease vasculature. Convolutional neural networks were trained, using real or synthetic RVMs, to detect plus disease from 2 real RVM test datasets. Main Outcome Measures Features of real and synthetic RVMs were evaluated using uniform manifold approximation and projection (UMAP). Similarities were evaluated at the dataset and feature level using Fréchet inception distance and Euclidean distance, respectively. CNN performance was assessed via area under the receiver operating characteristic curve (AUC); AUCs were compared via bootstrapping and DeLong’s test for correlated receiver operating characteristic curves. Confusion matrices were compared using McNemar’s chi-square test and Cohen’s κ value. Results The CNN trained on synthetic RVMs showed a significantly higher AUC (0.971; P = 0.006 and P = 0.004) and classified plus disease more similarly to a set of 8 international experts (κ = 0.922) than the CNN trained on real RVMs (AUC = 0.934; κ = 0.701).
Real and synthetic RVMs overlapped, by plus disease diagnosis, on the UMAP manifold, showing that synthetic images spanned the disease severity spectrum. Fréchet inception distance and Euclidean distances suggested that real and synthetic RVMs were more dissimilar to one another than real RVMs were to one another, further suggesting that synthetic RVMs were distinct from the training data with respect to privacy considerations. Conclusions Synthetic datasets may be useful for training robust medical AI models. Furthermore, PGANs may be able to synthesize realistic data for use without protected health information concerns.
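The AUC comparisons above rely on resampling. A minimal paired-bootstrap sketch (illustrative only, not the authors' code; the rank-based AUC, the `n_boot` default, and the fixed seed are assumptions) might look like:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: the probability that a random positive case is
    scored higher than a random negative case (ties not handled)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def paired_bootstrap_auc_diff(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    """Resample the test set with replacement and collect the AUC difference
    between two models, yielding a mean difference and a 95% interval."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].sum() in (0, len(idx)):
            continue  # a resample needs both classes for AUC to be defined
        diffs.append(auc(y_true[idx], scores_a[idx]) - auc(y_true[idx], scores_b[idx]))
    diffs = np.asarray(diffs)
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])
```

DeLong's test, used alongside bootstrapping in the study, is the analytic alternative for comparing correlated AUCs without resampling.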
Collapse
|
41
|
A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2022; 22:8-22. [PMID: 35048782 DOI: 10.1080/15265161.2021.2013977] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The application of artificial intelligence and machine learning (ML) technologies in healthcare has immense potential to improve the care of patients. While there are some emerging practices surrounding responsible ML, as well as regulatory frameworks, the traditional role of research ethics oversight has been relatively unexplored regarding its relevance for clinical ML. In this paper, we provide a comprehensive research ethics framework that can apply to the systematic inquiry of ML research across its development cycle. The pathway consists of three stages: (1) exploratory, hypothesis-generating data access; (2) silent period evaluation; and (3) prospective clinical evaluation. We connect each stage to its literature and ethical justification and suggest adaptations to traditional paradigms to suit ML while maintaining ethical rigor and the protection of individuals. This pathway can accommodate a multitude of research designs, from observational studies to controlled trials, and the stages can apply individually to a variety of ML applications.
Collapse
|
42
|
Image Consent and the Development of Image-Based Artificial Intelligence. JAMA Dermatol 2022; 158:589. [PMID: 35416918 DOI: 10.1001/jamadermatol.2022.0689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
|
43
|
Semisupervised Training of a Brain MRI Tumor Detection Model Using Mined Annotations. Radiology 2022; 303:80-89. [PMID: 35040676 PMCID: PMC8962822 DOI: 10.1148/radiol.210817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 10/12/2021] [Accepted: 11/03/2021] [Indexed: 11/11/2022]
Abstract
Background Artificial intelligence (AI) applications for cancer imaging conceptually begin with automated tumor detection, which can provide the foundation for downstream AI tasks. However, supervised training requires many image annotations, and performing dedicated post hoc image labeling is burdensome and costly. Purpose To investigate whether clinically generated image annotations can be data mined from the picture archiving and communication system (PACS), automatically curated, and used for semisupervised training of a brain MRI tumor detection model. Materials and Methods In this retrospective study, the cancer center PACS was mined for brain MRI scans acquired between January 2012 and December 2017 and included all annotated axial T1 postcontrast images. Line annotations were converted to boxes, excluding boxes shorter than 1 cm or longer than 7 cm. The resulting boxes were used for supervised training of object detection models using RetinaNet and Mask region-based convolutional neural network (R-CNN) architectures. The best-performing model trained from the mined data set was used to detect unannotated tumors on training images themselves (self-labeling), automatically correcting many of the missing labels. After self-labeling, new models were trained using this expanded data set. Models were scored for precision, recall, and F1 using a held-out test data set comprising 754 manually labeled images from 100 patients (403 intra-axial and 56 extra-axial enhancing tumors). Model F1 scores were compared using bootstrap resampling. Results The PACS query extracted 31 150 line annotations, yielding 11 880 boxes that met inclusion criteria. This mined data set was used to train models, yielding F1 scores of 0.886 for RetinaNet and 0.908 for Mask R-CNN. Self-labeling added 18 562 training boxes, improving model F1 scores to 0.935 (P < .001) and 0.954 (P < .001), respectively. 
Conclusion The application of semisupervised learning to mined image annotations significantly improved tumor detection performance, achieving an excellent F1 score of 0.954. This development pipeline can be extended for other imaging modalities, repurposing unused data silos to potentially enable automated tumor detection across radiologic modalities. © RSNA, 2022 Online supplemental material is available for this article.
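The line-to-box conversion with the 1 cm / 7 cm exclusion described in the Methods could be sketched as follows (an illustrative reconstruction, not the study's code; the `Line` container and the 2 mm padding are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Line:
    """A clinician-drawn measurement line, endpoints in millimetres."""
    x0: float
    y0: float
    x1: float
    y1: float

def line_to_box(line, min_len_mm=10.0, max_len_mm=70.0, pad_mm=2.0):
    """Convert a measurement line to a padded axis-aligned bounding box.
    Lines shorter than 1 cm or longer than 7 cm are excluded, mirroring
    the study's inclusion criteria; the padding amount is an assumption."""
    length = ((line.x1 - line.x0) ** 2 + (line.y1 - line.y0) ** 2) ** 0.5
    if not (min_len_mm <= length <= max_len_mm):
        return None  # dropped from the training set
    x_lo, x_hi = sorted((line.x0, line.x1))
    y_lo, y_hi = sorted((line.y0, line.y1))
    return (x_lo - pad_mm, y_lo - pad_mm, x_hi + pad_mm, y_hi + pad_mm)
```

Boxes produced this way can then feed a standard object-detection trainer such as the RetinaNet and Mask R-CNN architectures the study used.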
Collapse
|
44
|
Ensemble-based deep meta learning for medical image segmentation. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-219221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Deep learning methods have led to state-of-the-art medical applications, such as image classification and segmentation, and data-driven deep learning applications can support further collaboration among stakeholders. However, limited labeled data restricts the ability of deep learning algorithms to generalize from one domain to another. Meta-learning helps to address this issue, in particular because it can learn from a small set of data. We proposed a meta-learning-based image segmentation model that combines the learning of state-of-the-art models and then used it to achieve domain adaptation and high accuracy. We also proposed a preprocessing algorithm to increase the usability of the segmented regions and remove noise from new test images. The proposed model achieves 0.94 precision and 0.92 recall, an improvement of 3.3% over state-of-the-art algorithms.
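A pixel-wise majority vote is one common way to combine several segmentation models; this sketch (a generic illustration, not the paper's implementation) also shows how the reported precision and recall are computed:

```python
import numpy as np

def ensemble_masks(masks, threshold=0.5):
    """Combine binary segmentation masks by pixel-wise majority vote:
    a pixel is foreground when at least `threshold` of models agree."""
    return (np.stack(masks).mean(axis=0) >= threshold).astype(np.uint8)

def precision_recall(pred, truth):
    """Precision and recall of a predicted binary mask vs. ground truth."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(int(pred.sum()), 1)
    recall = tp / max(int(truth.sum()), 1)
    return float(precision), float(recall)
```

Note that with the 0.5 threshold, an even split among models counts as foreground; lowering or raising the threshold trades recall against precision.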
Collapse
|
45
|
[Artificial intelligence (AI) in radiology? : Do we need as many radiologists in the future?]. Urologe A 2022; 61:392-399. [PMID: 35277758 DOI: 10.1007/s00120-022-01768-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/14/2022] [Indexed: 11/27/2022]
Abstract
We are in the middle of a digital revolution in medicine. This raises the question of whether disciplines such as radiology, which is superficially concerned with the interpretation of images, will be particularly changed by this revolution. In particular, it should be discussed whether the completion of initially simpler, then more complex image analysis tasks by computer systems may lead to a reduced need for radiologists in the future. What distinguishes radiology in particular is its key position between advanced technology and medical care. This article argues that not only radiology but every medical discipline will be affected by innovations arising from the digital revolution; that a redefinition of medical specialties focusing on imaging and visual interpretation makes sense; and that the arrival of artificial intelligence (AI) in radiology is to be welcomed in the context of ever larger amounts of image data, as it may be the only way to handle the increasing amount of image data with the current number of radiologists. In this respect, the balance between research and teaching on the one hand and patient care on the other is more difficult to maintain in the academic environment. AI can help improve efficiency and balance in these areas. With regard to specialist training, information technology topics are expected to be integrated into the radiological curriculum. Radiology acts as a pioneer shaping the entry of AI into medicine. It is to be expected that by the time radiologists could be substantially replaced by AI, the replacement of human contributions in other medical and non-medical fields will also be well advanced.
Collapse
|
46
|
The ethical challenges of artificial intelligence-driven digital pathology. JOURNAL OF PATHOLOGY CLINICAL RESEARCH 2022; 8:209-216. [PMID: 35174655 PMCID: PMC8977272 DOI: 10.1002/cjp2.263] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 01/13/2022] [Accepted: 01/27/2022] [Indexed: 12/28/2022]
Abstract
Digital pathology - the digitalisation of clinical histopathology services through the scanning and storage of pathology slides - has opened up new possibilities for health care in recent years, particularly in the opportunities it brings for artificial intelligence (AI)-driven research. Recognising, however, that there is little scholarly debate on the ethics of digital pathology when used for AI research, this paper summarises what it sees as four key ethical issues to consider when deploying AI infrastructures in pathology, namely, privacy, choice, equity, and trust. The themes are inspired from the authors' experience grappling with the challenge of deploying an ethical digital pathology infrastructure to support AI research as part of the National Pathology Imaging Cooperative (NPIC), a collaborative of universities, hospital trusts, and industry partners largely located across the North of England. Though focusing on the UK case, internationally, few pathology departments have gone fully digital, and so the themes developed here offer a heuristic for ethical reflection for other departments currently making a similar transition or planning to do so in the future. We conclude by promoting the need for robust public governance mechanisms in AI-driven digital pathology.
Collapse
|
47
|
SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med 2022; 296:114782. [DOI: 10.1016/j.socscimed.2022.114782] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 12/12/2022]
|
48
|
Abstract
Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track and share key developments in medical AI. We cover prospective studies and advances in medical image analysis, which have reduced the gap between research and deployment. We also address several promising avenues for novel medical AI research, including non-image data sources, unconventional problem formulations and human-AI collaboration. Finally, we consider serious technical and ethical challenges in issues spanning from data scarcity to racial bias. As these challenges are addressed, AI's potential may be realized, making healthcare more accurate, efficient and accessible for patients worldwide.
Collapse
|
49
|
Deep Learning in Large and Multi-Site Structural Brain MR Imaging Datasets. Front Neuroinform 2022; 15:805669. [PMID: 35126080 PMCID: PMC8811356 DOI: 10.3389/fninf.2021.805669] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 12/27/2021] [Indexed: 12/22/2022] Open
Abstract
Large, multi-site, heterogeneous brain imaging datasets are increasingly required for the training, validation, and testing of advanced deep learning (DL)-based automated tools, including structural magnetic resonance (MR) image-based diagnostic and treatment monitoring approaches. When assembling a number of smaller datasets to form a larger dataset, understanding the underlying variability between different acquisition and processing protocols across the aggregated dataset (termed “batch effects”) is critical. The presence of variation in the training dataset is important as it more closely reflects the true underlying data distribution and, thus, may enhance the overall generalizability of the tool. However, the impact of batch effects must be carefully evaluated in order to avoid undesirable effects that, for example, may reduce performance measures. Batch effects can result from many sources, including differences in acquisition equipment, imaging technique and parameters, as well as applied processing methodologies. Their impact, both beneficial and adversarial, must be considered when developing tools to ensure that their outputs are related to the proposed clinical or research question (i.e., actual disease-related or pathological changes) and are not simply due to the peculiarities of underlying batch effects in the aggregated dataset. We reviewed applications of DL in structural brain MR imaging that aggregated images from neuroimaging datasets, typically acquired at multiple sites. We examined datasets containing both healthy control participants and patients that were acquired using varying acquisition protocols. First, we discussed issues around Data Access and enumerated the key characteristics of some commonly used publicly available brain datasets. 
Then we reviewed methods for correcting batch effects by exploring the two main classes of approaches: Data Harmonization that uses data standardization, quality control protocols or other similar algorithms and procedures to explicitly understand and minimize unwanted batch effects; and Domain Adaptation that develops DL tools that implicitly handle the batch effects by using approaches to achieve reliable and robust results. In this narrative review, we highlighted the advantages and disadvantages of both classes of DL approaches, and described key challenges to be addressed in future studies.
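A naive per-site z-scoring routine illustrates the simplest form of the Data Harmonization class of approaches described above (illustrative only; production pipelines typically use richer methods such as ComBat, which also preserve biological covariates):

```python
import numpy as np

def harmonize_by_site(features, site_ids):
    """Remove each site's mean and scale so that downstream models see
    features on a common scale. A crude stand-in for harmonization
    methods such as ComBat; it does not protect disease-related signal."""
    feats = np.asarray(features, dtype=float).copy()
    sites = np.asarray(site_ids)
    for site in np.unique(sites):
        rows = sites == site
        mu = feats[rows].mean(axis=0)
        sd = feats[rows].std(axis=0) + 1e-8  # guard against zero variance
        feats[rows] = (feats[rows] - mu) / sd
    return feats
```

The risk the review highlights applies directly here: if disease prevalence differs across sites, blind per-site centering can remove pathological signal along with the batch effect, which is why covariate-aware methods are preferred.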
Collapse
|
50
|
Can we trust trust-based data governance models? DATA & POLICY 2022. [DOI: 10.1017/dap.2022.36] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022] Open
Abstract
Fiduciary agents and trust-based institutions are increasingly proposed and considered in legal, regulatory, and ethical discourse as an alternative or addition to a control-based model of data management. Instead of leaving it up to the citizen to decide what to do with her data and to ensure that her best interests are met, an independent person or organization acts on her behalf, potentially also taking the general interest into account. By ensuring that these interests are protected, the hope is that citizens’ willingness to share data will increase, thereby allowing for more data-driven projects. Thus, trust-based models are presented as a win–win scenario. It is clear, however, that trust-based approaches also entail apparent dangers. One model in particular, that of data trusts, may have far-reaching consequences.
Collapse
|