1. Edalati S, Vasan V, Cheng CP, Patel Z, Govindaraj S, Iloreta AM. Can GPT-4 revolutionize otolaryngology? Navigating opportunities and ethical considerations. Am J Otolaryngol 2024; 45:104303. PMID: 38678799. DOI: 10.1016/j.amjoto.2024.104303.
Abstract
Otolaryngologists can enhance workflow efficiency, provide better patient care, and advance medical research and education by integrating artificial intelligence (AI) into their practices. GPT-4 is a revolutionary, contemporary example of AI that may apply to otolaryngology. When GPT-4 is used to make critical medical decisions and provide individualized patient care, it should supplement, not replace, the knowledge of otolaryngologists. In this examination, we explore the potential uses of GPT-4 in otolaryngology, covering both potential outcomes and technical boundaries, and we delve into the intricate dilemmas that emerge when incorporating GPT-4 into the field, including the ethical considerations inherent in its implementation. Our stance is that GPT-4 has the potential to be very helpful. Its capabilities, which include aid in clinical decision-making, patient care, and administrative task automation, present exciting possibilities for enhancing patient outcomes, the efficiency of healthcare delivery, and patient experiences. Although certain obstacles and limitations remain, the progress made so far shows that GPT-4 can be a valuable tool for modern medicine. As the technology develops, GPT-4 may play a more significant role in clinical practice, helping medical professionals deliver high-quality care tailored to each patient's unique needs.
Affiliation(s)
- Shaun Edalati, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Vikram Vasan, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Christopher P Cheng, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Zara Patel, Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Satish Govindaraj, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Alfred Marc Iloreta, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2. Underdahl L, Ditri M, Duthely LM. Physician Burnout: Evidence-Based Roadmaps to Prioritizing and Supporting Personal Wellbeing. J Healthc Leadersh 2024; 16:15-27. PMID: 38192639. PMCID: PMC10773242. DOI: 10.2147/jhl.s389245.
Abstract
Current literature validates the magnitude of physician burnout as a complex challenge affecting physicians, patients, and healthcare delivery that demands science-informed intervention. Interventions, framed here as roadmaps to prioritizing and supporting personal wellbeing, address organizational factors, individual factors, and moral injury, with virtually no consensus on optimal approaches. The purpose of this conceptual review is to present evidence-based insights on contributing factors, mitigation, and the design of adaptive systems to combat and prevent burnout. Science-informed policy initiatives that support long-term organizational change, endorsed by both leadership and institutional stakeholders, are key to sustaining personal wellbeing and ending burnout.
Affiliation(s)
- Louise Underdahl, College of Doctoral Studies, University of Phoenix, Phoenix, AZ, USA
- Mary Ditri, Community Health, New Jersey Hospital Association, Princeton, NJ, USA
- Lunthita M Duthely, Obstetrics, Gynecology and Reproductive Sciences and the Department of Public Health Sciences, University of Miami Health System, Miami, FL, USA
3. Wang Y, Fu W, Gu Y, Fang W, Zhang Y, Jin C, Yin J, Wang W, Xu H, Ge X, Ye C, Tang L, Fang J, Wang D, Su L, Wang J, Zhang X, Feng R. Comparative survey among paediatricians, nurses and health information technicians on ethics implementation knowledge of and attitude towards social experiments based on medical artificial intelligence at children's hospitals in Shanghai: a cross-sectional study. BMJ Open 2023; 13:e071288. PMID: 37989373. PMCID: PMC10668289. DOI: 10.1136/bmjopen-2022-071288.
Abstract
OBJECTIVES Implementing ethics is crucial to prevent harm and promote widespread benefits in social experiments based on medical artificial intelligence (MAI). However, little information is available on this within the paediatric healthcare sector. We aimed to conduct a comparative survey among paediatricians, nurses and health information technicians regarding knowledge of, and attitudes towards, ethics implementation in MAI social experiments at children's hospitals in Shanghai. DESIGN AND SETTING A cross-sectional electronic questionnaire was administered from 1 July 2022 to 31 July 2022 at tertiary children's hospitals in Shanghai. PARTICIPANTS All eligible individuals were recruited. The inclusion criteria were: (1) being a paediatrician, nurse or health information technician; (2) having been engaged in, or currently participating in, social experiments based on MAI; and (3) voluntary participation in the survey. PRIMARY OUTCOME Knowledge of, and attitudes towards, ethics implementation in MAI social experiments among paediatricians, nurses and health information technicians. RESULTS A total of 137 paediatricians, 135 nurses and 60 health information technicians at tertiary children's hospitals responded to the questionnaire. Only 2.4-9.6% of participants were familiar with ethics implementation knowledge of MAI social experiments, while 31.9-86.1% held an 'agree' attitude towards ethics implementation. Health information technicians accounted for the highest proportion of participants familiar with implementing ethics, whereas paediatricians or nurses accounted for the highest proportion of those holding 'agree' attitudes.
CONCLUSIONS There is a significant knowledge gap and variations in attitudes among paediatricians, nurses and health information technicians, which underscore the urgent need for individualised education and training programmes to enhance MAI ethics implementation in paediatric healthcare.
Affiliation(s)
- Yingwen Wang, Nursing Department, Children's Hospital of Fudan University, Shanghai, China
- Weijia Fu, Medical Information Center, Children's Hospital of Fudan University, Shanghai, China
- Ying Gu, Nursing Department, Children's Hospital of Fudan University, Shanghai, China
- Weihan Fang, Shanghai Pinghe Bilingual School, Shanghai, China
- Yuejie Zhang, School of Computer Science, Fudan University, Shanghai, China
- Cheng Jin, School of Computer Science, Fudan University, Shanghai, China
- Jie Yin, School of Philosophy, Fudan University, Shanghai, China
- Weibing Wang, School of Public Health, Fudan University, Shanghai, China
- Hong Xu, Nephrology Department, Children's Hospital of Fudan University, Shanghai, China
- Xiaoling Ge, Statistical and Data Management Center, Children's Hospital of Fudan University, Shanghai, China
- Chengjie Ye, Medical Information Center, Children's Hospital of Fudan University, Shanghai, China
- Liangfeng Tang, Medical Information Center, Children's Hospital of Fudan University, Shanghai, China
- Jinwu Fang, School of Public Health, Fudan University, Shanghai, China
- Daoyang Wang, School of Computer Science, Fudan University, Shanghai, China
- Ling Su, Children's Hospital of Fudan University, Shanghai, China
- Jiayu Wang, Medical Information Center, Children's Hospital of Fudan University, Shanghai, China
- Xiaobo Zhang, Respiratory Department, Children's Hospital of Fudan University, Shanghai, China
- Rui Feng, Medical Information Center, Children's Hospital of Fudan University, Shanghai, China; School of Computer Science, Fudan University, Shanghai, China
4. Mello-Thoms C, Mello CAB. Clinical applications of artificial intelligence in radiology. Br J Radiol 2023; 96:20221031. PMID: 37099398. PMCID: PMC10546456. DOI: 10.1259/bjr.20221031.
Abstract
The rapid growth of medical imaging has placed increasing demands on radiologists. In this scenario, artificial intelligence (AI) has become an attractive partner, one that may complement case interpretation and aid in various non-interpretive aspects of work in the radiological clinic. In this review, we discuss interpretative and non-interpretative uses of AI in clinical practice and report on barriers to AI's adoption in the clinic. We show that AI currently has modest to moderate penetration in clinical practice, with many radiologists still unconvinced of its value and the return on its investment. Moreover, we discuss radiologists' liabilities regarding AI decisions and explain how we currently lack regulation to guide the implementation of explainable AI or of self-learning algorithms.
Affiliation(s)
- Carlos A B Mello, Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil
5. Akinrinmade AO, Adebile TM, Ezuma-Ebong C, Bolaji K, Ajufo A, Adigun AO, Mohammad M, Dike JC, Okobi OE. Artificial Intelligence in Healthcare: Perception and Reality. Cureus 2023; 15:e45594. PMID: 37868407. PMCID: PMC10587915. DOI: 10.7759/cureus.45594.
Abstract
Artificial intelligence (AI) has birthed the new "big thing" in modern medicine. It promises safer, improved care that will benefit patients and become a helpful tool in the hands of a skilled physician. Despite this anticipation, however, the implementation and usage of AI are still in their elementary phases, particularly due to legal and ethical considerations that centre on data. These challenges should not be brushed aside but rather recognized and resolved to enable acceptance by all relevant stakeholders without prejudice. Once these challenges are overcome, AI will truly revolutionize the field of medicine with improved diagnostic accuracy, reduced physician burnout, and enhanced treatment modalities. It is therefore paramount that physicians embrace AI and that it be integrated into medical education, so that the profession is well prepared for its role in the future of medicine.
Affiliation(s)
- Abidemi O Akinrinmade, Medicine and Surgery, Benjamin S. Carson School of Medicine, Babcock University, Ilishan-Remo, NGA
- Temitayo M Adebile, Public Health, Georgia Southern University, Statesboro, USA; Nephrology, Boston Medical Center, Malden, USA
- Aisha O Adigun, Infectious Diseases, University of Louisville, Louisville, USA
- Majed Mohammad, Geriatrics, Mount Carmel Grove City Hospital, Grove City, USA
- Juliet C Dike, Internal Medicine, University of Calabar, Calabar, NGA
- Okelue E Okobi, Family Medicine, Larkin Community Hospital Palm Springs Campus, Miami, USA; Family Medicine, Medficient Health Systems, Laurel, USA; Family Medicine, Lakeside Medical Center, Belle Glade, USA
6. Kierner S, Kucharski J, Kierner Z. Taxonomy of hybrid architectures involving rule-based reasoning and machine learning in clinical decision systems: A scoping review. J Biomed Inform 2023; 144:104428. PMID: 37355025. DOI: 10.1016/j.jbi.2023.104428.
Abstract
BACKGROUND As the application of Artificial Intelligence (AI) technologies increases in the healthcare sector, the industry faces a need to combine medical knowledge, often expressed as clinical rules, with advances in machine learning (ML), which offer high prediction accuracy at the expense of transparency of decision making. PURPOSE This paper reviews the present literature, identifies hybrid architecture patterns that incorporate rules and machine learning, and evaluates the rationale behind their selection to inform future development and research on the design of transparent and precise clinical decision systems. METHODS PubMed, IEEE Xplore, and Google Scholar were queried for papers from 1992 to 2022 with the keywords "clinical decision system", "hybrid clinical architecture", and "machine learning and clinical rules". Articles were excluded if they did not use both ML and rules or did not explain the employed architecture. A proposed taxonomy was used to organize and analyze the results and depict them in graphical and tabular form. Two researchers, one with expertise in rule-based systems and another in ML, reviewed the identified papers and discussed the work to minimize bias, and a third re-reviewed the work to ensure consistency of reporting. RESULTS The authors screened 957 papers and reviewed 71 that met their criteria. Five distinct architecture archetypes were identified: Rules Embedded in ML architecture (REML), the most used; ML pre-processes input data for Rule-Based inference (MLRB); Rule-Based method pre-processes input data for ML prediction (RBML); Rules influence ML Training (RMLT); and Parallel Ensemble of Rules and ML (PERML), which was rarely observed in clinical contexts. CONCLUSIONS Most architectures in the reviewed literature prioritize prediction accuracy over explainability and trustworthiness, which has led to more complex embedded approaches. Alternatively, parallel (PERML) architectures may be employed, allowing for a more transparent system that is easier to explain to patients and clinicians; the potential of this approach warrants further research. OTHER A limitation of the study is that it reviews scientific literature, while algorithms implemented in clinical practice may present different distributions of motivations and implementations of hybrid architectures.
Affiliation(s)
- Slawomir Kierner, Lodz University of Technology, Faculty of Electrical, Electronic, Computer and Control Engineering, 27 Isabella Street, 02116 Boston, MA, USA
- Jacek Kucharski, Lodz University of Technology, Faculty of Electrical, Electronic, Computer and Control Engineering, 18/22 Stefanowskiego St., 90-924 Łodź, Poland
- Zofia Kierner, University of California, Berkeley College of Letters & Science, Berkeley, CA 94720-1786, USA
7. Amanian A, Heffernan A, Ishii M, Creighton FX, Thamboo A. The Evolution and Application of Artificial Intelligence in Rhinology: A State of the Art Review. Otolaryngol Head Neck Surg 2023; 169:21-30. PMID: 35787221. PMCID: PMC11110957. DOI: 10.1177/01945998221110076.
Abstract
OBJECTIVE To provide a comprehensive overview of the applications of artificial intelligence (AI) in rhinology, highlight its limitations, and propose strategies for its integration into surgical practice. DATA SOURCES Medline, Embase, CENTRAL, Ei Compendex, IEEE, and Web of Science. REVIEW METHODS English-language studies from inception until January 2022 focusing on any application of AI in rhinology were included. Study selection was performed independently by 2 authors; discrepancies were resolved by the senior author. Studies were categorized by rhinology theme, and data collection comprised the type of AI utilized, sample size, and outcomes, including accuracy and precision among others. CONCLUSIONS A total of 5435 articles were identified. Following title and abstract screening, 130 articles underwent full-text review, and 59 were selected for analysis; 11 studies were from the gray literature. Articles were stratified into image processing, segmentation, and diagnostics (n = 27); rhinosinusitis classification (n = 14); treatment and disease outcome prediction (n = 8); optimizing surgical navigation and phase assessment (n = 3); robotic surgery (n = 2); olfactory dysfunction (n = 2); and diagnosis of allergic rhinitis (n = 3). Most AI studies were published from 2016 onward (n = 45). IMPLICATIONS FOR PRACTICE This state of the art review highlights the increasing applications of AI in rhinology. Next steps will entail multidisciplinary collaboration to ensure data integrity, ongoing validation of AI algorithms, and integration into clinical practice. Future research should target the interplay of AI with robotics and surgical education.
Affiliation(s)
- Ameen Amanian, Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Austin Heffernan, Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Masaru Ishii, Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton, Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Andrew Thamboo, Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
8. Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023; 9:20552076231186064. PMID: 37434728. PMCID: PMC10331228. DOI: 10.1177/20552076231186064.
Abstract
Background Artificial intelligence (AI) technologies are transforming medicine and healthcare. Scholars and practitioners have debated the philosophical, ethical, legal, and regulatory implications of medical AI, and empirical research on stakeholders' knowledge, attitudes, and practices has started to emerge. This study is a systematic review of published empirical studies of medical AI ethics, with the goal of mapping the main approaches, findings, and limitations of the scholarship to inform future practice. Methods We searched seven databases for published peer-reviewed empirical studies on medical AI ethics and evaluated them in terms of the types of technologies studied, geographic locations, stakeholders involved, research methods used, ethical principles studied, and major findings. Findings Thirty-six studies were included (published 2013-2022). They typically belonged to one of three topics: exploratory studies of stakeholder knowledge of and attitudes toward medical AI, theory-building studies testing hypotheses about factors contributing to stakeholders' acceptance of medical AI, and studies identifying and correcting bias in medical AI. Interpretation There is a disconnect between the high-level ethical principles and guidelines developed by ethicists and empirical research on the topic, and a need to embed ethicists, in tandem with AI developers, clinicians, patients, and scholars of innovation and technology adoption, in the study of medical AI ethics.
Affiliation(s)
- Lu Tang, Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Jinxu Li, Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Sophia Fantus, School of Social Work, University of Texas at Arlington, Arlington, TX, USA
9. Liu DS, Sawyer J, Luna A, Aoun J, Wang J, Boachie L, Halabi S, Joe B. Perceptions of US Medical Students on Artificial Intelligence in Medicine: Mixed Methods Survey Study. JMIR Med Educ 2022; 8:e38325. PMID: 36269641. PMCID: PMC9636531. DOI: 10.2196/38325.
Abstract
BACKGROUND Given the rapidity with which artificial intelligence is gaining momentum in clinical medicine, physician leaders have called for more incorporation of artificial intelligence topics into undergraduate medical education, to prepare future physicians to work with artificial intelligence technology. However, the first step in curriculum development is to survey the needs of end users, and no study has determined which media and which topics US medical students most prefer for learning about artificial intelligence in medicine. OBJECTIVE We aimed to survey US medical students on the need to incorporate artificial intelligence in undergraduate medical education and their preferred means of doing so, to assist future education initiatives. METHODS A mixed methods survey comprising both specific questions and a write-in response section was sent through Qualtrics to US medical students in May 2021. Likert scale questions first assessed perceptions of artificial intelligence in medicine; specific questions then addressed preferred learning formats and topics. RESULTS We surveyed 390 US medical students with an average age of 26 (SD 3) years from 17 different medical programs (estimated response rate 3.5%). A majority (355/388, 91.5%) of respondents agreed that training in artificial intelligence concepts during medical school would be useful for their future. While 79.4% (308/388) were excited to use artificial intelligence technologies, 91.2% (353/387) either reported that their medical schools did not offer relevant resources or were unsure whether they did. Short lectures (264/378, 69.8%), formal electives (180/378, 47.6%), and Q and A panels (167/378, 44.2%) were identified as preferred formats, while fundamental concepts of artificial intelligence (247/379, 65.2%), when to use artificial intelligence in medicine (227/379, 59.9%), and the pros and cons of using artificial intelligence (224/379, 59.1%) were the most preferred topics. CONCLUSIONS These results indicate that current US medical students recognize the importance of artificial intelligence in medicine and acknowledge that formal education and resources on artificial intelligence-related topics are limited at most US medical schools. Respondents also indicated that a hybrid formal/flexible format would be most appropriate for incorporating artificial intelligence into US medical school curricula. Based on these data, we conclude that there is a definitive knowledge gap in artificial intelligence education within current US medical education, and that opinions differ on the specific formats and topics to be introduced.
Affiliation(s)
- David Shalom Liu, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Jake Sawyer, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Alexander Luna, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Jihad Aoun, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Janet Wang, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Lord Boachie, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
- Safwan Halabi, Pediatric Radiology, Ann & Robert H Lurie Children's Hospital of Chicago, Chicago, IL, United States
- Bina Joe, Department of Physiology and Pharmacology, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, United States
10. Akhtar N, Khan N, Qayyum S, Qureshi MI, Hishan SS. Efficacy and pitfalls of digital technologies in healthcare services: A systematic review of two decades. Front Public Health 2022; 10:869793. PMID: 36187628. PMCID: PMC9523565. DOI: 10.3389/fpubh.2022.869793.
Abstract
The use of technology in the healthcare sector and its medical practices, from patient record maintenance to diagnostics, has significantly improved the healthcare emergency management system. Against that backdrop, it is crucial to explore the role and challenges of these technologies in the healthcare sector. This study therefore provides a systematic review of the literature on technological developments in the healthcare sector and deduces their pros and cons. We curated published studies from the Web of Science and Scopus databases using PRISMA 2015 guidelines and, after screening, selected 55 studies for the systematic literature review and bibliometric analysis. The study explores four significant classes of technological development in healthcare: (a) digital technologies, (b) artificial intelligence, (c) blockchain, and (d) the Internet of Things. The findings indicate that digital technologies have significantly influenced healthcare services, from the advent of the electronic health record to a new era of digital healthcare, while robotic surgery and machine learning algorithms may, as future technologies, replace practitioners in some tasks. However, a considerable number of studies have criticized these technologies in the health sector on grounds of trust, security, privacy, and accuracy. The study suggests that future work on technological development in healthcare services should take these issues into account for the sustainable development of the healthcare sector.
Affiliation(s)
- Nadeem Akhtar, School of Urban Culture, South China Normal University, Foshan, China
- Nohman Khan, UniKL Business School, Universiti Kuala Lumpur, Kuala Lumpur, Malaysia
- Shazia Qayyum, Institute of Applied Psychology, University of the Punjab, Lahore, Pakistan
- Muhammad Imran Qureshi, Teesside University International Business School, Middlesbrough, United Kingdom
- Snail S. Hishan, Azman Hashim International Business School, Universiti Teknologi, Kuala Lumpur, Malaysia; Independent Researcher, THRIVE Project, Brisbane, QLD, Australia
11. Rojas JC, Fahrenbach J, Makhni S, Cook SC, Williams JS, Umscheid CA, Chin MH. Framework for Integrating Equity Into Machine Learning Models. Chest 2022; 161:1621-1627. DOI: 10.1016/j.chest.2022.02.001.
12. Vearrier L, Derse AR, Basford JB, Larkin GL, Moskop JC. Artificial Intelligence in Emergency Medicine: Benefits, Risks, and Recommendations. J Emerg Med 2022; 62:492-499. PMID: 35164977. DOI: 10.1016/j.jemermed.2022.01.001.
Abstract
BACKGROUND Artificial intelligence (AI) can be described as the use of computers to perform tasks that formerly required human cognition. The American Medical Association prefers the term 'augmented intelligence' over 'artificial intelligence' to emphasize the assistive role of computers in enhancing, rather than replacing, physician skills. The integration of AI into emergency medicine, and into clinical practice at large, has increased in recent years, and that trend is likely to continue. DISCUSSION AI has demonstrated substantial potential benefit for physicians and patients. These benefits are transforming the therapeutic relationship from the traditional physician-patient dyad into a triadic doctor-patient-machine relationship. New AI technologies, however, require careful vetting, legal standards, patient safeguards, and provider education. Emergency physicians (EPs) should recognize the limits and risks of AI as well as its potential benefits. CONCLUSIONS EPs must learn to partner with, not capitulate to, AI. AI has proven superior to, or on a par with, physicians at certain skills, such as interpreting radiographs and making diagnoses based on visual cues, as in skin cancer. AI can provide cognitive assistance, but EPs must interpret AI results within the clinical context of individual patients. They must also advocate for patient confidentiality, professional liability coverage, and the essential role of specialty-trained EPs.
Affiliation(s)
- Laura Vearrier, Department of Emergency Medicine, University of Mississippi Medical Center, Jackson, Mississippi
- Arthur R Derse, Center for Bioethics, Medical Humanities, and Department of Emergency Medicine, Medical College of Wisconsin, Wauwatosa, Wisconsin
- Jesse B Basford, Departments of Family and Emergency Medicine, Alabama College of Osteopathic Medicine, Dothan, Alabama
- Gregory Luke Larkin, Department of Emergency Medicine, Northeast Ohio Medical University, Rootstown, Ohio
- John C Moskop, Department of Internal Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
Collapse
13
Romero RA, Young SD. Public perceptions and implementation considerations on the use of artificial intelligence in health. J Eval Clin Pract 2022; 28:75-78. PMID: 33977613. DOI: 10.1111/jep.13580.
Affiliation(s)
- Romina A Romero: Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA
- Sean D Young: Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA; University of California Institute for Prediction Technology, Department of Informatics, University of California, Irvine, Irvine, CA, USA
14
Abstract
Wearables have become a natural element of human life, shaping our way of perceiving, understanding, and experiencing the world. Enriched with elements of artificial intelligence, they will change our habits and draw us into the digital dimension of the world: a space of uninterrupted interaction between people and technology. As a result, new ideas for the effective use of AI wearables in the consumer space continue to emerge. The main aim of the article is to examine the determinants behind the acceptance of AI wearables, with particular emphasis on the strength and nature of the relationship between the consumer and technology. The UTAUT2 model is used for this purpose. The article is a continuation of previous reflections and analyses in this area; at the same time, it constitutes an initial stage of research on issues related to the adoption of AI wearables.
15
Xu L, Sanders L, Li K, Chow JCL. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer 2021; 7:e27850. PMID: 34847056. PMCID: PMC8669585. DOI: 10.2196/27850.
Abstract
Background: Chatbots are a timely topic applied in various fields, including medicine and health care, for human-like knowledge transfer and communication. Machine learning, a subset of artificial intelligence, has proven particularly applicable in health care, with the ability for complex dialog management and conversational flexibility.
Objective: This review aims to report on the recent advances and current trends in chatbot technology in medicine. A brief historical overview, along with the developmental progress and design characteristics, is first introduced. The focus is on cancer therapy, with in-depth discussions and examples of diagnosis, treatment, monitoring, patient support, workflow efficiency, and health promotion. In addition, the paper explores the limitations and areas of concern, highlighting ethical, moral, security, technical, and regulatory standards and evaluation issues to explain the hesitancy in implementation.
Methods: A search of the literature published in the past 20 years was conducted using the IEEE Xplore, PubMed, Web of Science, Scopus, and OVID databases. The screening of chatbots was guided by the open-access Botlist directory for health care components and further divided according to the following criteria: diagnosis, treatment, monitoring, support, workflow, and health promotion.
Results: Even after addressing these issues and establishing the safety or efficacy of chatbots, human elements in health care will not be replaceable. Chatbots therefore have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes. Other applications in pandemic support, global health, and education are yet to be fully explored.
Conclusions: Further research and interdisciplinary collaboration could advance this technology to dramatically improve the quality of care for patients, rebalance the workload for clinicians, and revolutionize the practice of medicine.
Affiliation(s)
- Lu Xu: Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, Western University, London, ON, Canada
- Leslie Sanders: Department of Humanities, York University, Toronto, ON, Canada
- Kay Li: Department of English, York University, Toronto, ON, Canada
- James C L Chow: Department of Medical Physics, Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
16
Abstract
Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of 'treat like cases alike' and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a meta-theory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions. (2) It must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter). (3) It must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. On the basis of this definition, we organize the insights of commentators into a process-structure map of the ethical territory that we hope will bring clarity to computer scientists and ethicists analyzing fair AI while clearing some ground for further technical and philosophical work.
Affiliation(s)
- Ryan van Nood: Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN 47907, USA
- Christopher Yeomans: Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN 47907, USA
17
Martinho A, Kroesen M, Chorus C. A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med 2021; 121:102190. PMID: 34763805. DOI: 10.1016/j.artmed.2021.102190.
Abstract
Artificial Intelligence (AI) is moving towards the health space. It is generally acknowledged that, while there is great promise in the implementation of AI technologies in healthcare, it also raises important ethical issues. In this study we surveyed medical doctors based in The Netherlands, Portugal, and the U.S. from a diverse mix of medical specializations about the ethics surrounding Health AI. Four main perspectives have emerged from the data representing different views about this matter. The first perspective (AI is a helpful tool: Let physicians do what they were trained for) highlights the efficiency associated with automation, which will allow doctors to have the time to focus on expanding their medical knowledge and skills. The second perspective (Rules & Regulations are crucial: Private companies only think about money) shows strong distrust in private tech companies and emphasizes the need for regulatory oversight. The third perspective (Ethics is enough: Private companies can be trusted) puts more trust in private tech companies and maintains that ethics is sufficient to ground these corporations. And finally the fourth perspective (Explainable AI tools: Learning is necessary and inevitable) emphasizes the importance of explainability of AI tools in order to ensure that doctors are engaged in the technological progress. Each perspective provides valuable and often contrasting insights about ethical issues that should be operationalized and accounted for in the design and development of AI Health.
Affiliation(s)
- Caspar Chorus: Delft University of Technology, Delft, the Netherlands
18
Stevens BR, Pepine CJ. Emerging role of machine learning in cardiovascular disease investigation and translations. Am Heart J Plus 2021; 11:100050. PMID: 38559318. PMCID: PMC10978128. DOI: 10.1016/j.ahjo.2021.100050.
Abstract
Unexpected insights and practical advances in cardiovascular disease (CVD) are being discovered by rapidly advancing developments in supercomputers and machine learning (ML) software algorithms. These have been accelerated during the COVID-19 pandemic, and the resulting CVD translational implications of ML are steering new measures of prevention and treatment, new tools for objective clinical diagnosis, and even opportunities for rethinking basic foundations of CVD nosology. As the usual cardiovascular specialist may not be familiar with these tools, the editor has invited this brief overview.
Affiliation(s)
- Bruce R. Stevens: Department of Physiology and Functional Genomics, University of Florida College of Medicine, Gainesville, FL, USA
- Carl J. Pepine: Division of Cardiovascular Medicine, University of Florida College of Medicine, Gainesville, FL, USA
19
Abstract
The use of artificial intelligence and machine learning (ML) has revolutionized our daily lives and will soon be instrumental in healthcare delivery. The rise of ML is due to multiple factors: increasing access to massive datasets, exponential increases in processing power, and key algorithmic developments that allow ML models to tackle increasingly challenging questions. Progressively more transplantation research is exploring the potential utility of ML models throughout the patient journey, although this has not yet widely transitioned into the clinical domain. In this review, we explore common approaches used in ML in solid organ clinical transplantation and consider opportunities for ML to help clinicians and patients. We discuss ways in which ML can help leverage large, complex datasets, generate cutting-edge prediction models, perform clinical image analysis, discover novel markers in molecular data, and fuse datasets to generate novel insights in modern transplantation practice. We focus on key areas in transplantation in which ML is driving progress, explore the future potential roles of ML, and discuss the challenges and limitations of these powerful tools.
Affiliation(s)
- Katie L Connor: Centre for Cardiovascular Science, University of Edinburgh; Edinburgh Transplant Unit, Royal Infirmary of Edinburgh; Centre for Inflammation Research, University of Edinburgh, Edinburgh, United Kingdom
- Eoin D O'Sullivan: Centre for Inflammation Research, University of Edinburgh, Edinburgh, United Kingdom
- Lorna P Marson: Edinburgh Transplant Unit, Royal Infirmary of Edinburgh; Centre for Inflammation Research, University of Edinburgh, Edinburgh, United Kingdom
- Stephen J Wigmore: Edinburgh Transplant Unit, Royal Infirmary of Edinburgh; Centre for Inflammation Research, University of Edinburgh, Edinburgh, United Kingdom
- Ewen M Harrison: Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, United Kingdom
20
Abstract
The advances in artificial intelligence (AI) provide an opportunity to expand the frontier of medicine to improve diagnosis, efficiency and management. By extension of being able to perform any task that a human could, a machine that meets the requirements of artificial general intelligence ('strong' AI; AGI) possesses the basic necessities to perform as, or at least qualify to become, a doctor. In this emerging field, this article explores the distinctions between doctors and AGI, and the prerequisites for AGI performing as clinicians. In doing so, it necessitates the requirement for a classification of medical AI and prepares for the development of AGI. With its imminent arrival, it is beneficial to create a framework from which leading institutions can define specific criteria for AGI.
Affiliation(s)
- Fawz Kazzazi: Mason Institute for Medicine, Life Sciences and Law, Edinburgh, UK
21
Castiglioni I, Rundo L, Codari M, Di Leo G, Salvatore C, Interlenghi M, Gallivanone F, Cozzi A, D'Amico NC, Sardanelli F. AI applications to medical images: From machine learning to deep learning. Phys Med 2021; 83:9-24. PMID: 33662856. DOI: 10.1016/j.ejmp.2021.02.006.
Abstract
PURPOSE: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on challenging points that need to be clarified in order to develop AI applications as clinical decision support systems in the real-world context.
METHODS: A narrative review was performed, including a critical assessment of articles published between 1989 and 2021 that guided the challenging sections.
RESULTS: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks, allowing us to directly process images. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies), and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black box issue). Pros and cons of choosing ML versus DL to implement AI applications in medical imaging are finally presented in a synoptic way.
CONCLUSIONS: Biomedicine and healthcare systems are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.
Affiliation(s)
- Isabella Castiglioni: Department of Physics, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy; Institute of Biomedical Imaging and Physiology, National Research Council, Via Fratelli Cervi 93, 20090 Segrate, Italy
- Leonardo Rundo: Department of Radiology, Box 218, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, United Kingdom
- Marina Codari: Department of Radiology, Stanford University School of Medicine, Stanford University, 300 Pasteur Drive, Stanford, CA, USA
- Giovanni Di Leo: Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy
- Christian Salvatore: Scuola Universitaria Superiore IUSS Pavia, Piazza della Vittoria 15, 27100 Pavia, Italy; DeepTrace Technologies S.r.l., Via Conservatorio 17, 20122 Milano, Italy
- Matteo Interlenghi: DeepTrace Technologies S.r.l., Via Conservatorio 17, 20122 Milano, Italy
- Francesca Gallivanone: Institute of Biomedical Imaging and Physiology, National Research Council, Via Fratelli Cervi 93, 20090 Segrate, Italy
- Andrea Cozzi: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy
- Natascha Claudia D'Amico: Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Via Saint Bon 20, 20147 Milano, Italy; Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Francesco Sardanelli: Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy
22
Sutton RA, Sharma P. Overcoming barriers to implementation of artificial intelligence in gastroenterology. Best Pract Res Clin Gastroenterol 2021; 52-53:101732. PMID: 34172254. DOI: 10.1016/j.bpg.2021.101732.
Abstract
Artificial intelligence is poised to revolutionize the field of medicine; however, significant questions must be answered before it can be implemented on a regular basis. Many artificial intelligence algorithms remain limited by isolated datasets, which may cause selection bias and truncated learning for the program. While a central database may solve this issue, several barriers such as security, patient consent, and management structure prevent this from being implemented. An additional barrier to daily use is device approval by the Food and Drug Administration. In order for this to occur, clinical studies must address new endpoints, including and beyond the traditional bio- and medical statistics. These must showcase artificial intelligence's benefit and answer key questions, including challenges posed in the field of medical ethics.
Affiliation(s)
- Richard A Sutton: University of Kansas Medical Center, 3901 Rainbow Blvd, Kansas City, KS, USA; Kansas City Veterans Affairs Medical Center, 4801 Linwood Blvd, Kansas City, MO, USA
- Prateek Sharma: University of Kansas Medical Center, 3901 Rainbow Blvd, Kansas City, KS, USA; Kansas City Veterans Affairs Medical Center, 4801 Linwood Blvd, Kansas City, MO, USA
23
Baxter MS, White A, Lahti M, Murto T, Evans J. Machine learning in a time of COVID-19 - Can machine learning support Community Health Workers (CHWs) in low and middle income countries (LMICs) in the new normal? J Glob Health 2021; 11:03017. PMID: 33643627. PMCID: PMC7898557. DOI: 10.7189/jogh.11.03017.
Affiliation(s)
- Mats Stage Baxter: Centre for Medical Informatics, Usher Institute, The University of Edinburgh, Edinburgh, UK
- Alan White: Institute of Applied Health Science, University of Aberdeen, Aberdeen, UK; Interactive Health Ltd, Inverness, UK
- Mari Lahti: Health and Well-being, Turku University of Applied Sciences, Turku, Finland
- Tiina Murto: Health and Well-being, Turku University of Applied Sciences, Turku, Finland
- Jay Evans: Global Health Academy, Usher Institute, The University of Edinburgh, Edinburgh, UK
24
Abstract
Recent news of Catholic and secular healthcare systems sharing electronic health record (EHR) data with technology companies for the purposes of developing artificial intelligence (AI) applications has drawn attention to the ethical and social challenges of such collaborations, including threats to patient privacy and confidentiality, undermining of patient consent, and lack of corporate transparency. Although the United States Conference of Catholic Bishops' Ethical and Religious Directives for Health Care Services (ERDs) address collaborations between US Catholic healthcare providers and other entities, the ERDs do not adequately address the novel concerns seen in EHR data-sharing for AI development. Neither does the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. This article describes ethical and social problems observed in recent patient data-sharing collaborations with AI companies and analyzes them in light of the guiding principles of the ERDs as well as the 2020 Rome Call for AI Ethics (RCAIE) document recently released by the Vatican. While both the ERDs and RCAIE guiding principles can inform future collaborations, we suggest that the next revision of the ERDs should consider addressing data-sharing and AI more directly.
Summary: Electronic health record data-sharing with artificial intelligence developers presents unique ethical and social challenges that can be addressed with updated United States Conference of Catholic Bishops' Ethical and Religious Directives and guidance from the Vatican's 2020 Rome Call for AI Ethics.
Affiliation(s)
- Emily E Anderson: Neiswanger Institute for Bioethics, Stritch School of Medicine, Loyola University Chicago, Maywood, IL, USA
25
Abstract
Artificial intelligence (AI) is a path-breaking advancement for many industries, including the health care sector. The rapid development of information technology and data processing has led to the formation of the set of tools now known as artificial intelligence. Radiology has long been a portal for medical technological advancements, and AI will likely be no different: it can impact every step of a radiologist's workflow, simplifying activities such as ordering and scheduling, protocoling and acquisition, image interpretation, reporting, communication, and billing. AI has eminent potential to augment efficiency and accuracy throughout radiology, but it also possesses inherent drawbacks and biases. We collected studies published in the past five years using PubMed as our database, choosing studies relevant to artificial intelligence in radiology. We mainly focused on the overview of AI in radiology, the components involved in the functioning of AI, AI assisting the radiologist's workflow, ethical aspects of AI, the challenges and biases AI is experiencing, and some clinical applications of AI. Of all 33 studies, 15 articles discussed the overview and components of AI, five articles discussed AI affecting the radiologist's workflow, five articles related to challenges and biases in AI, two articles discussed ethical aspects of AI, and six articles covered practical implications of AI. We found that the application of AI could make time-dependent tasks effortless, permitting radiologists more time and opportunities to engage in patient care through increased time for consultation, advances in imaging, and extraction of useful data from those images. AI can only be an aid to radiologists and will not replace them. Radiologists who use AI to their benefit, rather than avoiding it out of fear, may supersede those who do not. Substantial research should be done regarding the practical implications of AI algorithms for residents' curricula and the benefits of AI in radiology.
26
Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon MP, Shaw J, Roy D, Fleet R, Ag Ahmed MA, Fortin JP. Artificial Intelligence and Health Technology Assessment: Anticipating a New Level of Complexity. J Med Internet Res 2020; 22:e17707. PMID: 32406850. PMCID: PMC7380986. DOI: 10.2196/17707.
Abstract
Artificial intelligence (AI) is seen as a strategic lever to improve access, quality, and efficiency of care and services and to build learning and value-based health systems. Many studies have examined the technical performance of AI within an experimental context. These studies provide limited insights into the issues that its use in a real-world context of care and services raises. To help decision makers address these issues in a systemic and holistic manner, this viewpoint paper relies on the health technology assessment core model to contrast the expectations of the health sector toward the use of AI with the risks that should be mitigated for its responsible deployment. The analysis adopts the perspective of payers (ie, health system organizations and agencies) because of their central role in regulating, financing, and reimbursing novel technologies. This paper suggests that AI-based systems should be seen as a health system transformation lever, rather than a discrete set of technological devices. Their use could bring significant changes and impacts at several levels: technological, clinical, human and cognitive (patient and clinician), professional and organizational, economic, legal, and ethical. The assessment of AI's value proposition should thus go beyond technical performance and cost logic by performing a holistic analysis of its value in a real-world context of care and services. To guide AI development, generate knowledge, and draw lessons that can be translated into action, the right political, regulatory, organizational, clinical, and technological conditions for innovation should be created as a first step.
Affiliation(s)
- Hassane Alami: Public Health Research Center, Université de Montréal, Montreal, QC, Canada; Department of Health Management, Evaluation and Policy, École de santé publique de l'Université de Montréal, Montreal, QC, Canada; Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Pascale Lehoux: Public Health Research Center, Université de Montréal, Montreal, QC, Canada; Department of Health Management, Evaluation and Policy, École de santé publique de l'Université de Montréal, Montreal, QC, Canada
- Yannick Auclair: Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Michèle de Guise: Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Marie-Pierre Gagnon: Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada; Faculty of Nursing Science, Université Laval, Quebec, QC, Canada
- James Shaw: Joint Centre for Bioethics, University of Toronto, Toronto, ON, Canada; Institute for Health System Solutions and Virtual Care, Women's College Hospital, Toronto, ON, Canada
- Denis Roy: Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Richard Fleet: Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada; Department of Family Medicine and Emergency Medicine, Faculty of Medicine, Université Laval, Quebec, QC, Canada; Research Chair in Emergency Medicine, Université Laval - CHAU Hôtel-Dieu de Lévis, Lévis, QC, Canada
- Mohamed Ali Ag Ahmed: Research Chair on Chronic Diseases in Primary Care, Université de Sherbrooke, Chicoutimi, QC, Canada
- Jean-Paul Fortin: Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada; Department of Social and Preventive Medicine, Faculty of Medicine, Université Laval, Quebec, QC, Canada
27
Abstract
Purpose: The subject of the article is the concept of augmented intelligence, which constitutes a further stage in the development of research on artificial intelligence. This is a new phenomenon that has rarely been considered in the subject literature so far, and one that may be interesting for the fields of social sciences and humanities. The aim is to describe the features of this technology and determine the practical and ethical problems associated with its implementation in libraries.
Design/methodology/approach: The method of literature review was used. Systematic searches according to specific questions were carried out using the Scopus and Web of Science scientific databases, as well as Google Scholar and the LISTA abstract database.
Findings: The results established that the issue of augmented intelligence has barely been discussed in the field of librarianship. Although this technology may be interesting as a new area of librarian research and as a new framework for designing innovative services, deep ethical consideration is necessary before this technology is introduced in libraries.
Research limitations/implications: The article deals with some of the newest technologies available, and this topic is generally very rarely discussed in scientific publications in either the social sciences or humanities. Therefore, due to the limited availability of materials, the findings presented in the article are primarily of a conceptual nature. The aim is to present this topic from the perspective of librarianship and to create a starting point for further discussion on the ethical aspects of introducing new technologies in libraries.
Practical implications: The results can be widely used in practice as a framework for the implementation of augmented intelligence in libraries.
Social implications: The article can help to facilitate the debate on the role of implementing new technologies in libraries.
Originality/value: The problem of augmented intelligence is very rarely addressed in the subject literature in the field of library and information science.
28
Abstract
Artificial intelligence (AI) is quickly expanding within the sphere of health care, offering the potential to enhance the efficiency of care delivery, diminish costs, and reduce diagnostic and therapeutic errors. As the field of otolaryngology also explores use of AI technology in patient care, a number of ethical questions warrant attention prior to widespread implementation of AI. This commentary poses many of these ethical questions for consideration by the otolaryngologist specifically, using the 4 pillars of medical ethics (autonomy, beneficence, nonmaleficence, and justice) as a framework, and advocating both for the assistive role of AI in health care and for a shared decision-making, empathic approach to patient care.
Affiliation(s)
- Alexandra M Arambula: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, USA
- Andrés M Bur: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, USA
29
Armitage R. We must oppose lethal autonomous weapons systems. Br J Gen Pract 2019; 69:510-511. DOI: 10.3399/bjgp19x705869.
30
Javorsky E, Tegmark M, Helfand I. Lethal autonomous weapons. BMJ 2019; 364:l1171. PMID: 30910775. DOI: 10.1136/bmj.l1171.
Affiliation(s)
- Max Tegmark: Department of Physics & Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ira Helfand: International Physicians for the Prevention of Nuclear War, Malden, MA, USA
31
Kahn CE. Do the Right Thing. Radiol Artif Intell 2019; 1:e194001. PMID: 33937790. PMCID: PMC8017388. DOI: 10.1148/ryai.2019194001.