1
Waugh J, Evans J, Miocevic M, Lockie D, Aminzadeh P, Lynch A, Bell RJ. Performance of artificial intelligence in 7533 consecutive prevalent screening mammograms from the BreastScreen Australia program. Eur Radiol 2024; 34:3947-3957. [PMID: 37955669] [DOI: 10.1007/s00330-023-10396-7]
Abstract
OBJECTIVES To assess the performance of an artificial intelligence (AI) algorithm in the Australian mammography screening program, which routinely uses two independent readers with arbitration of discordant results. METHODS A total of 7533 prevalent round mammograms from 2017 were available for analysis. The AI program classified mammograms into deciles on the basis of breast cancer (BC) risk. BC diagnoses, including invasive BC (IBC) and ductal carcinoma in situ (DCIS), included those from the prevalent round, interval cancers, and cancers identified in the subsequent screening round two years later. Performance was assessed by sensitivity, specificity, positive and negative predictive values, and the proportion of women recalled by the radiologists and identified as higher risk by AI. RESULTS Radiologists identified 54 women with IBC and 13 with DCIS, with a recall rate of 9.7%. In comparison, 51/54 IBCs and 12/13 cases of DCIS were within the highest AI score group (score 10), a recall equivalent of 10.6% (a difference of 0.9%; CI -0.03 to 1.89%; p = 0.06). When IBCs identified in the 2017 round, interval cancers classified as false negatives or with minimal signs in 2017, and cancers from the 2019 round were combined, the radiologists identified 54/67, while 59/67 were in the highest-risk AI category (sensitivity 80.6% and 88.1%, respectively; the difference was not statistically significant). CONCLUSIONS As the performance of AI was comparable to that of expert radiologists, future AI roles in screening could include replacing one reader and supporting arbitration, reducing workload and false positive results. CLINICAL RELEVANCE STATEMENT AI analysis of consecutive prevalent screening mammograms from the Australian BreastScreen program demonstrated the algorithm's ability to match the cancer detection of experienced radiologists, additionally identifying five interval cancers (false negatives) and the majority of the false positive recalls.
KEY POINTS
• The AI program was almost as sensitive as the radiologists in identifying prevalent lesions (51/54 for invasive breast cancer, 63/67 when including ductal carcinoma in situ).
• If selected interval cancers and cancers identified in the subsequent screening round were included, the AI program identified more cancers than the radiologists (59/67 compared with 54/67; sensitivity 88.1% and 80.6%, respectively; p = 0.24).
• The high negative predictive value of a score of 1-9 would indicate a role for AI as a triage tool to reduce the recall rate (specifically false positives).
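The headline comparison in this entry reduces to simple proportions over the counts quoted in the abstract; a minimal sketch (counts from the text above, helper name ours) reproduces the reported sensitivities:

```python
# Sketch only: recompute the sensitivities reported by Waugh et al.
# from the counts quoted in the abstract. The function name is ours,
# not the study's.

def sensitivity(true_pos: int, total_cancers: int) -> float:
    """Fraction of all cancers flagged: TP / (TP + FN)."""
    return true_pos / total_cancers

# Radiologists flagged 54 of 67 combined cancers; AI (score 10) flagged 59 of 67.
rad_sens = sensitivity(54, 67)  # reported as 80.6%
ai_sens = sensitivity(59, 67)   # reported as 88.1%

print(f"radiologists: {rad_sens:.1%}, AI: {ai_sens:.1%}")
```

The 0.9% recall-rate difference quoted in the abstract is the same arithmetic applied to recalls (10.6% vs 9.7%).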
Affiliation(s)
- John Waugh
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Jill Evans
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Miranda Miocevic
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Darren Lockie
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Parisa Aminzadeh
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Anne Lynch
- Monash BreastScreen, Monash Cancer Centre, Moorabbin Hospital, 823-865 Centre Road, Bentleigh East, Victoria, 3165, Australia
- Robin J Bell
- Women's Health Research Program, School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, 3004, Australia
2
Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024; 10:705-726. [PMID: 38787015] [PMCID: PMC11125819] [DOI: 10.3390/tomography10050055]
Abstract
With the increasing dominance of artificial intelligence (AI) techniques, the important prospects for their application have extended to various medical fields, including domains such as in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening-through mammography, ultrasound, or magnetic resonance imaging (MRI)-can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in the field of medical imaging of the breast (mammography and ultrasound), specifically in terms of identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, this paper also reviews related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi
- Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Shakir Al-Zaidi
- Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael
- Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Abdulmajeed Mounzer Mouhsen
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tarek Ajam
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Bashar A. Alshalabi
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohamed Salman
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed
- School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
3
Coffey K, Aukland B, Amir T, Sevilimedu V, Saphier NB, Mango VL. Artificial Intelligence Decision Support for Triple-Negative Breast Cancers on Ultrasound. Journal of Breast Imaging 2024; 6:33-44. [PMID: 38243859] [DOI: 10.1093/jbi/wbad080]
Abstract
OBJECTIVE To assess the performance of an artificial intelligence (AI) decision support software in assessing and recommending biopsy of triple-negative breast cancers (TNBCs) on US. METHODS A retrospective institutional review board-approved review identified patients diagnosed with TNBC after US-guided biopsy between 2009 and 2019. AI output for TNBCs on diagnostic US included lesion features (shape, orientation) and a likelihood of malignancy category (benign, probably benign, suspicious, probably malignant). An AI true positive was defined as suspicious or probably malignant, and an AI false negative (FN) as benign or probably benign. AI and radiologist lesion feature agreement, AI and radiologist sensitivity and FN rate (FNR), and features associated with AI FNs were determined using the Wilcoxon rank-sum test, Fisher's exact test, the chi-square test of independence, and kappa statistics. RESULTS The study included 332 patients with 345 TNBCs. AI and radiologists demonstrated moderate agreement for lesion shape and orientation (κ = 0.48 and κ = 0.47, each P < .001). On the set of examinations using earlier diagnostic US, radiologists recommended biopsy of 339/345 lesions (sensitivity 98.3%, FNR 1.7%), and AI recommended biopsy of 333/345 lesions (sensitivity 96.5%, FNR 3.5%), including 6/6 radiologist FNs. On the set of examinations using immediate prebiopsy diagnostic US, AI recommended biopsy of 331/345 lesions (sensitivity 95.9%, FNR 4.1%). AI FNs were more frequently oval (q < 0.001), parallel (q < 0.001), circumscribed (q = 0.04), and complex cystic and solid (q = 0.006). CONCLUSION AI accurately recommended biopsies for 96% to 97% of TNBCs on US and may assist radiologists in classifying these lesions, which often demonstrate benign sonographic features.
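The agreement statistic this study reports (κ ≈ 0.48, "moderate") is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, with illustrative labels rather than the study's data:

```python
# Sketch of Cohen's kappa, the chance-corrected agreement statistic
# used to compare AI and radiologist lesion descriptors. The toy
# label sequences below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """(observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

ai     = ["oval", "round", "oval", "irregular", "oval", "round"]
reader = ["oval", "oval",  "oval", "irregular", "round", "round"]
print(round(cohens_kappa(ai, reader), 3))  # 0.455 on this toy data
```

Values around 0.41-0.60 are conventionally read as "moderate" agreement, which is how the abstract characterizes κ = 0.47-0.48.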
Affiliation(s)
- Kristen Coffey
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Brianna Aukland
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Tali Amir
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Varadan Sevilimedu
- Department of Biostatistics and Epidemiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Nicole B Saphier
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Victoria L Mango
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
4
Yoen H, Jang MJ, Yi A, Moon WK, Chang JM. Artificial Intelligence for Breast Cancer Detection on Mammography: Factors Related to Cancer Detection. Acad Radiol 2024:S1076-6332(23)00679-7. [PMID: 38216413] [DOI: 10.1016/j.acra.2023.12.006]
Abstract
RATIONALE AND OBJECTIVES Little is known about the factors affecting the performance of artificial intelligence (AI) software for breast cancer detection on mammography. This study aimed to identify factors associated with the abnormality scores assigned by the AI software. MATERIALS AND METHODS A retrospective database search was conducted to identify consecutive asymptomatic women who underwent breast cancer surgery between April 2016 and December 2019. A commercially available AI software (Lunit INSIGHT MMG, ver. 1.1.4.0) was applied to preoperative mammography to assign individual abnormality scores to the lesions, and a score of 10 or higher was considered a positive detection by the AI software. Radiologists without knowledge of the AI results retrospectively assessed mammographic density and classified mammographic findings as positive or negative. General linear model (GLM) analysis was used to identify the clinical, pathological, and mammographic findings related to the abnormality scores, yielding coefficient β values that represent the mean difference per unit or relative to the reference value. Additionally, the reasons for non-detection by the AI software were investigated. RESULTS Among the 1001 index cancers (830 invasive cancers and 171 ductal carcinomas in situ) in 1001 patients, 717 (72%) were correctly detected by AI, while the remaining 284 (28%) were not detected. Multivariable GLM analysis showed that abnormal mammographic findings (β = 77.0 for mass, β = 73.1 for calcification only, β = 49.4 for architectural distortion, and β = 47.6 for asymmetry, compared with negative findings; all P < 0.001), invasive tumor size (β = 4.3 per 1 cm, P < 0.001), and human epidermal growth factor receptor type 2 (HER2) positivity (β = 9.2 compared with hormone receptor-positive, HER2-negative disease; P = 0.004) were associated with a higher mean abnormality score. AI failed to detect small asymmetries in extremely dense breasts, subcentimeter-sized or isodense lesions, and faint amorphous calcifications. CONCLUSION Cancers with positive abnormal mammographic findings on retrospective review, large invasive size, and HER2 positivity had high AI abnormality scores. Understanding the patterns of AI software performance is crucial for effectively integrating AI into clinical practice.
Affiliation(s)
- Heera Yoen
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, Republic of Korea
- Myoung-Jin Jang
- Medical Research Collaborating Center, Seoul National University Hospital, Seoul, Republic of Korea
- Ann Yi
- Department of Radiology, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Republic of Korea
- Woo Kyung Moon
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, Republic of Korea
- Jung Min Chang
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, Republic of Korea
5
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385] [DOI: 10.1088/1361-6560/acfade]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
6
Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. [PMID: 37656245] [DOI: 10.1007/s00432-023-05337-2]
Abstract
Breast cancer is one of the most common cancers and a leading cause of cancer-related death in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. Artificial intelligence is being applied increasingly widely in medicine, including image analysis, automated diagnosis, intelligent pharmaceutical systems, and personalized treatment. AI-based breast cancer imaging, pathology, and adjuvant therapy technologies can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer and looks ahead to the challenges facing AI-based breast cancer detection and therapy, to provide ideas for future research.
Affiliation(s)
- Shuixin Yan
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
7
Hosseini F, Asadi F, Emami H, Ebnali M. Machine learning applications for early detection of esophageal cancer: a systematic review. BMC Med Inform Decis Mak 2023; 23:124. [PMID: 37460991] [DOI: 10.1186/s12911-023-02235-y]
Abstract
INTRODUCTION Esophageal cancer (EC) is a significant global health problem, with an estimated 7th-highest incidence and 6th-highest mortality rate among cancers. Timely diagnosis and treatment are critical for improving patient outcomes, as over 40% of patients with EC are diagnosed after metastasis. Recent advances in machine learning (ML) techniques, particularly in computer vision, have demonstrated promising applications in medical image processing, assisting clinicians in making more accurate and faster diagnostic decisions. Given the significance of early detection of EC, this systematic review aims to summarize and discuss the current state of research on ML-based methods for the early detection of EC. METHODS We conducted a comprehensive systematic search of five databases (PubMed, Scopus, Web of Science, Wiley, and IEEE) using search terms such as "ML", "deep learning (DL)", "neural networks (NN)", "esophagus", "EC", and "early detection". After applying inclusion and exclusion criteria, 31 articles were retained for full review. RESULTS The results of this review highlight the potential of ML-based methods in the early detection of EC. The average accuracy of the reviewed methods in the analysis of endoscopic and computed tomography (CT) images of the esophagus was over 89%, indicating a high potential impact on early detection of EC. Additionally, the imaging type most often used for ML-based early detection of EC was white light imaging (WLI). Among all ML techniques, methods based on convolutional neural networks (CNN) achieved higher accuracy and sensitivity in the early detection of EC than other methods. CONCLUSION Our findings suggest that ML methods may improve accuracy in the early detection of EC, potentially supporting radiologists, endoscopists, and pathologists in diagnosis and treatment planning. However, the current literature is limited, and more studies are needed to investigate the clinical applications of these methods in the early detection of EC. Furthermore, many studies suffer from class imbalance and bias, highlighting the need for validation of detection algorithms across organizations in longitudinal studies.
Affiliation(s)
- Farhang Hosseini
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hassan Emami
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mahdi Ebnali
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
8
Panico A, Gatta G, Salvia A, Grezia GD, Fico N, Cuccurullo V. Radiomics in Breast Imaging: Future Development. J Pers Med 2023; 13:jpm13050862. [PMID: 37241032] [DOI: 10.3390/jpm13050862]
Abstract
Breast cancer is the most commonly diagnosed non-skin cancer in women. Several risk factors relate to habits and heredity, and regular screening is essential to reduce mortality. Thanks to screening and increased awareness among women, most breast cancers are diagnosed at an early stage, increasing the chances of cure and survival. Mammography is currently the gold standard for breast cancer diagnosis, but its sensitivity is limited: when glandular density is high, the ability to detect small masses is reduced, a lesion may be hidden or inconspicuous, and false negatives can occur when subtle details escape the radiologist's eye. The problem is therefore substantial, and it makes sense to look for techniques that can increase the quality of diagnosis. In recent years, innovative techniques based on artificial intelligence, able to see where the human eye cannot, have been applied to this end. In this paper, we examine the application of radiomics in mammography.
Affiliation(s)
- Alessandra Panico
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Gianluca Gatta
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Antonio Salvia
- Radiology Division, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
- Noemi Fico
- Department of Physics "Ettore Pancini", Università di Napoli Federico II, 80126 Naples, Italy
- Vincenzo Cuccurullo
- Nuclear Medicine Unit, Department of Precision Medicine, Università della Campania "Luigi Vanvitelli", 80138 Naples, Italy
9
Amir T, Coffey K, Sevilimedu V, Fardanesh R, Mango VL. A role for breast ultrasound Artificial Intelligence decision support in the evaluation of small invasive lobular carcinomas. Clin Imaging 2023; 101:77-85. [PMID: 37311398] [DOI: 10.1016/j.clinimag.2023.05.005]
Abstract
OBJECTIVE To evaluate the diagnostic performance of an Artificial Intelligence (AI) decision support (DS) system in the ultrasound (US) assessment of invasive lobular carcinoma (ILC) of the breast, a cancer that can demonstrate variable appearance and present insidiously. METHODS Retrospective review was performed of 75 patients with 83 ILCs diagnosed by core biopsy or surgery between November 2017 and November 2019. ILC characteristics (size, shape, echogenicity) were recorded. AI DS output (lesion characteristics, likelihood of malignancy) was compared with radiologist assessment. RESULTS The AI DS system interpreted 100% of ILCs as suspicious or probably malignant (100% sensitivity, 0% false-negative rate). 99% (82/83) of detected ILCs were initially recommended for biopsy by the interpreting breast radiologist, and 100% (83/83) were recommended for biopsy after one additional ILC was identified on same-day repeat diagnostic ultrasound. For lesions in which the AI DS output was probably malignant but the radiologist assigned a BI-RADS 4 assessment, the median lesion size was 1 cm, compared with a median lesion size of 1.4 cm for those given a BI-RADS 5 assessment (p = 0.006). These results suggest that AI may offer more useful DS for smaller, subcentimeter lesions in which shape, margin status, or vascularity is more difficult to discern. Only 20% of patients with ILC were assigned a BI-RADS 5 assessment by the radiologist. CONCLUSION The AI DS accurately characterized 100% of detected ILC lesions as suspicious or probably malignant. AI DS may help increase radiologist confidence when assessing ILC on ultrasound.
Affiliation(s)
- Tali Amir
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, United States of America
- Kristen Coffey
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, United States of America
- Varadan Sevilimedu
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, 485 Lexington Ave, 2nd floor, New York, NY 10017, United States of America
- Reza Fardanesh
- Department of Radiology, University of California Los Angeles, 1250 16th St, Suite 2340, Santa Monica, CA 90404, United States of America
- Victoria L Mango
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, United States of America
10
Trepanier C, Huang A, Liu M, Ha R. Emerging uses of artificial intelligence in breast and axillary ultrasound. Clin Imaging 2023; 100:64-68. [PMID: 37243994] [DOI: 10.1016/j.clinimag.2023.05.007]
Abstract
Breast ultrasound is a valuable adjunctive tool to mammography in detecting breast cancer, especially in women with dense breasts. Ultrasound also plays an important role in staging breast cancer by assessing axillary lymph nodes. However, its utility is limited by operator dependence, high recall rate, low positive predictive value and low specificity. These limitations present an opportunity for artificial intelligence (AI) to improve diagnostic performance and pioneer novel uses of ultrasound. Research in developing AI for radiology has flourished over the past few years. A subset of AI, deep learning, uses interconnected computational nodes to form a neural network, which extracts complex visual features from image data to train itself into a predictive model. This review summarizes several key studies evaluating AI programs' performance in predicting breast cancer and demonstrates that AI can assist radiologists and address limitations of ultrasound by acting as a decision support tool. This review also touches on how AI programs allow for novel predictive uses of ultrasound, particularly predicting molecular subtypes of breast cancer and response to neoadjuvant chemotherapy, which have the potential to change how breast cancer is managed by providing non-invasive prognostic and treatment data from ultrasound images. Lastly, this review explores how AI programs demonstrate improved diagnostic accuracy in predicting axillary lymph node metastasis. The limitations and future challenges in developing and implementing AI for breast and axillary ultrasound will also be discussed.
Affiliation(s)
- Christopher Trepanier
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America
- Alice Huang
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America
- Michael Liu
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America
- Richard Ha
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America
11
Shimokawa D, Takahashi K, Kurosawa D, Takaya E, Oba K, Yagishita K, Fukuda T, Tsunoda H, Ueda T. Deep learning model for breast cancer diagnosis based on bilateral asymmetrical detection (BilAD) in digital breast tomosynthesis images. Radiol Phys Technol 2023; 16:20-27. [PMID: 36342640] [DOI: 10.1007/s12194-022-00686-y]
Abstract
The purpose of this study was to develop a deep learning model that diagnoses breast cancer by embedding a diagnostic algorithm that examines the asymmetry of bilateral breast tissue. This retrospective study was approved by the institutional review board. A total of 115 patients who underwent breast surgery and had pathologically confirmed breast cancer were enrolled. Two types of image pairs [230 pairs of bilateral digital breast tomosynthesis (DBT) images: 115 pairs of malignant tumor and contralateral tissue (M/N), and 115 pairs of bilateral normal areas (N/N)] were generated from the enrolled patients. The proposed deep learning model, called bilateral asymmetrical detection (BilAD), is a modified Xception convolutional neural network (CNN) with two-dimensional tensors for bilateral breast images. BilAD was trained to classify the differences between pairs in the M/N and N/N datasets. Its results were compared with those of a unilateral control CNN model (uCNN). The results of BilAD and the uCNN were as follows: accuracy, 0.84 and 0.75; sensitivity, 0.73 and 0.58; and specificity, 0.93 and 0.92, respectively. The mean area under the receiver operating characteristic curve of BilAD was significantly higher than that of the uCNN (0.90 vs. 0.84, p = 0.02). The proposed deep learning model, trained by embedding a diagnostic algorithm that examines the asymmetry of bilateral breast tissue, improves diagnostic accuracy for breast cancer.
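The AUC comparison above (0.90 vs. 0.84) rests on the area under the ROC curve, which for a scored classifier equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch of that rank-based (Mann-Whitney) estimate, with toy scores that are illustrative only and not the BilAD model's outputs:

```python
# Sketch: rank-based (Mann-Whitney) estimate of ROC AUC.
# Toy scores below are illustrative, not from the BilAD study.

def auc(pos_scores, neg_scores):
    """P(random positive scores above random negative), ties counted as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.9, 0.8, 0.75, 0.4]  # model scores for M/N pairs (toy)
normal    = [0.6, 0.3, 0.2, 0.1]   # model scores for N/N pairs (toy)
print(auc(malignant, normal))  # 0.9375 for these toy scores
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is the scale on which BilAD's 0.90 and the uCNN's 0.84 are compared.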
Affiliation(s)
- Daiki Shimokawa: Department of Clinical Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8575, Japan
- Kengo Takahashi: Department of Clinical Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8575, Japan
- Daiya Kurosawa: Department of Clinical Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8575, Japan
- Eichi Takaya: AI Lab, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8574, Japan
- Ken Oba: Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo 104-8560, Japan
- Kazuyo Yagishita: Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo 104-8560, Japan
- Toshinori Fukuda: Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo 104-8560, Japan
- Hiroko Tsunoda: Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo 104-8560, Japan
- Takuya Ueda: Department of Clinical Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8575, Japan; AI Lab, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi 980-8574, Japan
12. Hanafy MM, Ahmed AAH, Ali EA. Mammographically detected asymmetries in the era of artificial intelligence. The Egyptian Journal of Radiology and Nuclear Medicine 2023. [DOI: 10.1186/s43055-023-00979-1]
Abstract
Background
Proper assessment of mammographically detected asymmetries is essential to avoid unnecessary biopsies and missed cancers, since such asymmetries may have a benign or malignant cause. According to the ACR BI-RADS atlas (2013), mammographically detected asymmetries are classified into asymmetry, focal asymmetry, global asymmetry, and developing asymmetry. We aimed to assess the diagnostic performance of artificial intelligence for mammographically detected asymmetries compared with breast ultrasound as well as with combined mammography and ultrasound.
Results
This prospective study comprised 51 women with breast asymmetry found on screening or diagnostic mammography. All participants underwent full-field digital mammography and ultrasound, and the mammographic images were then processed by the artificial intelligence software system. Mammography had a sensitivity of 100%, a specificity of 73%, a positive predictive value of 56.52%, a negative predictive value of 100%, and a diagnostic accuracy of 80%. Ultrasound showed a sensitivity of 100.00%, a specificity of 89.47%, a positive predictive value of 76.47%, a negative predictive value of 100.00%, and an accuracy of 92.16%. Combined mammography and breast ultrasound showed a sensitivity of 100.00%, a specificity of 86.84%, a positive predictive value of 72.22%, a negative predictive value of 100.00%, and an accuracy of 90.20%. Artificial intelligence demonstrated a sensitivity of 84.62%, a specificity of 94.74%, a positive predictive value of 48.26%, a negative predictive value of 94.47%, and an accuracy of 92.16%.
Conclusions
Adding breast ultrasound to the assessment of mammographically detected asymmetries led to better characterization, reducing false-positive results and improving specificity. Artificial intelligence showed better specificity than mammography, breast ultrasound, and combined mammography and ultrasound, so AI can be used to decrease unnecessary biopsies, as it increases diagnostic confidence, especially in cases with no definite suspicious abnormality on ultrasound.
13. Vedantham S, Shazeeb MS, Chiang A, Vijayaraghavan GR. Artificial Intelligence in Breast X-Ray Imaging. Semin Ultrasound CT MR 2023; 44:2-7. [PMID: 36792270] [PMCID: PMC9932302] [DOI: 10.1053/j.sult.2022.12.002]
Abstract
This topical review focuses on clinical breast x-ray imaging applications of the rapidly evolving field of artificial intelligence (AI). The range of AI applications is broad. AI can be used for breast cancer risk estimation, which could allow the screening interval and protocol to be tailored to the individual woman, and for triaging screening exams. It can also serve as a tool to aid detection and diagnosis for improved sensitivity and specificity, to reduce radiologists' reading time, and to act as a potential second 'reader' during screening interpretation. During the last decade, numerous studies have shown the potential of AI-assisted interpretation of mammography and, to a lesser extent, digital breast tomosynthesis; however, most of these studies are retrospective in nature. Prospective clinical studies are needed to evaluate these technologies and better understand their real-world efficacy. Further, ethical, medicolegal, and liability concerns need to be considered before routine use of AI in the breast imaging clinic.
Affiliation(s)
- Alan Chiang: Department of Medical Imaging, University of Arizona, Tucson, AZ
14. Derevianko A, Pizzoli SFM, Pesapane F, Rotili A, Monzani D, Grasso R, Cassano E, Pravettoni G. The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis? Cancers (Basel) 2023; 15:470. [PMID: 36672417] [PMCID: PMC9856827] [DOI: 10.3390/cancers15020470]
Abstract
BACKGROUND In the past decade, interest in applying Artificial Intelligence (AI) in radiology to improve diagnostic procedures has increased. AI has potential benefits spanning all steps of the imaging chain, from the prescription of diagnostic tests to the communication of test reports. The use of AI in radiology also poses challenges for doctor-patient communication at the time of diagnosis. This systematic review focuses on the patient's role and the interpersonal skills between patients and physicians when AI is implemented in cancer diagnosis communication. METHODS A systematic search was conducted on PubMed, Embase, Medline, Scopus, and PsycNet from 1990 to 2021. The search terms were: ("artificial intelligence" or "intelligence machine") and "communication", "radiology", and "oncology diagnosis". The PRISMA guidelines were followed. RESULTS 517 records were identified, and 5 papers met the inclusion criteria and were analyzed. Most of the articles emphasized the success of the technological support of AI in radiology at the expense of patient trust in AI and patient-centered communication in cancer disease. Practical implications and future guidelines were discussed in light of the results. CONCLUSIONS AI has proven beneficial in helping clinicians with diagnosis. Future research may improve patients' trust through adequate information about the advantageous use of AI and through adequate training of clinicians in doctor-patient communication of the diagnosis.
Affiliation(s)
- Alexandra Derevianko: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Silvia Francesca Maria Pizzoli: Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy (Correspondence; Tel.: +39-0294372099)
- Filippo Pesapane: Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20139 Milan, Italy
- Anna Rotili: Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20139 Milan, Italy
- Dario Monzani: Department of Psychology, Educational Science and Human Movement, University of Palermo, 90128 Palermo, Italy
- Roberto Grasso: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Enrico Cassano: Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20139 Milan, Italy
- Gabriella Pravettoni: Applied Research Division for Cognitive and Psychological Science, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
15. Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119] [PMCID: PMC9777253] [DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI) has continued to gain momentum over the past decades, disrupting a wide spectrum of applications. Within breast imaging, AI, especially machine learning and deep learning, has found great utility across four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. For years, breast cancer has topped the cancer cumulative risk ranking for women across the six continents; it exists in variegated forms and presents a complicated context for medical decisions. Given the ever-increasing demand for quality healthcare, contemporary AI is envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into a meaningful clinical endpoint. Here, the authors reviewed works from the past decades focusing on AI in breast imaging and systematized the included works into one usable document, termed an umbrella review. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study synthesizes, collates, and correlates the included review works, identifying the patterns, trends, quality, and types of works captured by the structured search strategy. It is intended to serve as a "one-stop center" synthesis, providing a holistic bird's-eye view on the topic to readers ranging from newcomers to established researchers and relevant stakeholders.
16. Detection of Incidental Pulmonary Embolism on Conventional Contrast-Enhanced Chest CT: Comparison of an Artificial Intelligence Algorithm and Clinical Reports. AJR Am J Roentgenol 2022; 219:895-902. [PMID: 35822644] [DOI: 10.2214/ajr.22.27895]
Abstract
BACKGROUND. Artificial intelligence (AI) algorithms have shown strong performance for detection of pulmonary embolism (PE) on CT examinations performed using a dedicated protocol for PE detection. AI performance is less well studied for detecting PE on examinations ordered for reasons other than suspected PE (i.e., incidental PE [iPE]). OBJECTIVE. The purpose of this study was to assess the diagnostic performance of an AI algorithm for detection of iPE on conventional contrast-enhanced chest CT examinations. METHODS. This retrospective study included 2555 patients (mean age, 53.2 ± 14.5 [SD] years; 1340 women, 1215 men) who underwent 3003 conventional contrast-enhanced chest CT examinations (i.e., not using pulmonary CTA protocols) between September 2019 and February 2020. A commercial AI algorithm was applied to the images to detect acute iPE. A vendor-supplied natural language processing (NLP) algorithm was applied to the clinical reports to identify examinations interpreted as positive for iPE. For all examinations that were positive by the AI-based image review or by NLP-based report review, a multireader adjudication process was implemented to establish a reference standard for iPE. Images were also reviewed to identify explanations of AI misclassifications. RESULTS. On the basis of the adjudication process, the frequency of iPE was 1.3% (40/3003). AI detected four iPEs missed by clinical reports, and clinical reports detected seven iPEs missed by AI. AI, compared with clinical reports, exhibited significantly lower PPV (86.8% vs 97.3%, p = .03) and specificity (99.8% vs 100.0%, p = .045). Differences in sensitivity (82.5% vs 90.0%, p = .37) and NPV (99.8% vs 99.9%, p = .36) were not significant. For AI, neither sensitivity nor specificity varied significantly in association with age, sex, patient status, or cancer-related clinical scenario (all p > .05). 
Explanations of false-positives by AI included metastatic lymph nodes and pulmonary venous filling defect, and explanations of false-negatives by AI included surgically altered anatomy and small-caliber subsegmental vessels. CONCLUSION. AI had high NPV and moderate PPV for iPE detection, detecting some iPEs missed by radiologists. CLINICAL IMPACT. Potential applications of the AI tool include serving as a second reader to help detect additional iPEs or as a worklist triage tool to allow earlier iPE detection and intervention. Various explanations of AI misclassifications may provide targets for model improvement.
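The combination of high NPV with only moderate PPV reported above follows largely from the low (1.3%) iPE prevalence. A minimal sketch of the standard Bayes-rule relationship between test characteristics, prevalence, and predictive values; the inputs are illustrative, not the study's exact computation:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence             # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# At 1.3% prevalence, even a very specific test gives only a moderate PPV,
# while NPV stays near 1.0 (inputs are illustrative, not the study's exact figures).
ppv, npv = ppv_npv(sensitivity=0.825, specificity=0.998, prevalence=0.013)
print(round(ppv, 3), round(npv, 4))  # → 0.845 0.9977
```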
17. Discrimination between phyllodes tumor and fibro-adenoma: Does artificial intelligence-aided mammograms have an impact? The Egyptian Journal of Radiology and Nuclear Medicine 2022. [DOI: 10.1186/s43055-022-00734-y]
Abstract
Background
Artificial intelligence (AI) has recently been considered in the workup for the detection and diagnosis of breast cancer, through algorithms that can supply a diagnosis as the radiologist does. Unlike standard programming, which requires clear step-by-step instructions, the algorithm learns from a supervised and continuous input of large, new datasets. The aim of this study was to assess the ability of AI-scanned mammograms to aid ultrasound in discriminating between phyllodes tumors and fibro-adenomas.
Results
This retrospective analysis included 374 pathologically proven phyllodes tumors (PT) and fibro-adenomas (FA). Digital mammography and breast ultrasound were performed for all cases, and each breast was assigned a Breast Imaging Reporting and Data System (BI-RADS) score. The mammograms were scanned by AI, yielding a qualitative heatmap and a quantitative abnormality score expressed as a percentage of suspicion.
The study included 164 PT (43.9%) and 210 FA (56.1%). BI-RADS category 2 was assigned in 40.1% of cases, category 3 in 38.2%, category 4 in 18.5%, and category 5 in 3.2%, with median AI abnormality scores of 23%, 44%, 65%, and 90%, respectively. The sensitivity and specificity of conventional imaging were 59.2% and 75.8%, respectively. An AI abnormality-score cutoff of 49.5% raised the sensitivity to 89.6% and the specificity to 94.8% for discriminating PT from FA masses.
Conclusion
Artificial intelligence-aided mammography can be used to distinguish PT from FA detected on sono-mammography. The color hue and the quantified abnormality-score percentage could serve as a single-setting method of characterization and thus guide clinicians in deciding between conservative management and the choice of surgical procedure.
18. Hsu W, Hippe DS, Nakhaei N, Wang PC, Zhu B, Siu N, Ahsen ME, Lotter W, Sorensen AG, Naeim A, Buist DSM, Schaffter T, Guinney J, Elmore JG, Lee CI. External Validation of an Ensemble Model for Automated Mammography Interpretation by Artificial Intelligence. JAMA Netw Open 2022; 5:e2242343. [PMID: 36409497] [PMCID: PMC9679879] [DOI: 10.1001/jamanetworkopen.2022.42343]
Abstract
Importance With a shortfall in fellowship-trained breast radiologists, mammography screening programs are looking toward artificial intelligence (AI) to increase efficiency and diagnostic accuracy. External validation studies provide an initial assessment of how promising AI algorithms perform in different practice settings. Objective To externally validate an ensemble deep-learning model using data from a high-volume, distributed screening program of an academic health system with a diverse patient population. Design, Setting, and Participants In this diagnostic study, an ensemble learning method, which reweights outputs of the 11 highest-performing individual AI models from the Digital Mammography Dialogue on Reverse Engineering Assessment and Methods (DREAM) Mammography Challenge, was used to predict the cancer status of an individual using a standard set of screening mammography images. This study was conducted using retrospective patient data collected between 2010 and 2020 from women aged 40 years and older who underwent a routine breast screening examination and participated in the Athena Breast Health Network at the University of California, Los Angeles (UCLA). Main Outcomes and Measures Performance of the challenge ensemble method (CEM) and the CEM combined with radiologist assessment (CEM+R) were compared with diagnosed ductal carcinoma in situ and invasive cancers within a year of the screening examination using performance metrics, such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). Results Evaluated on 37 317 examinations from 26 817 women (mean [SD] age, 58.4 [11.5] years), individual model AUROC estimates ranged from 0.77 (95% CI, 0.75-0.79) to 0.83 (95% CI, 0.81-0.85). The CEM model achieved an AUROC of 0.85 (95% CI, 0.84-0.87) in the UCLA cohort, lower than the performance achieved in the Kaiser Permanente Washington (AUROC, 0.90) and Karolinska Institute (AUROC, 0.92) cohorts. 
The CEM+R model achieved sensitivity (0.813 [95% CI, 0.781-0.843] vs 0.826 [95% CI, 0.795-0.856]; P = .20) and specificity (0.925 [95% CI, 0.916-0.934] vs 0.930 [95% CI, 0.929-0.932]; P = .18) similar to radiologist performance. However, in women with a prior history of breast cancer, the CEM+R model had significantly lower sensitivity (0.596 [95% CI, 0.466-0.717] vs 0.850 [95% CI, 0.766-0.923]; P < .001) and specificity (0.803 [95% CI, 0.734-0.861] vs 0.945 [95% CI, 0.936-0.954]; P < .001) than the radiologists, and in Hispanic women it had lower specificity (0.894 [95% CI, 0.873-0.910] vs 0.926 [95% CI, 0.919-0.933]; P = .004). Conclusions and Relevance This study found that the high performance of an ensemble deep-learning model for automated screening mammography interpretation did not generalize to a more diverse screening cohort, suggesting that the model experienced underspecification. The findings suggest the need for model transparency and fine-tuning of AI models for specific target populations prior to their clinical adoption.
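The abstract describes an ensemble that reweights the outputs of the 11 highest-performing individual models. The exact CEM reweighting scheme is not given here, so the following is only a generic weighted-average sketch with hypothetical scores:

```python
def ensemble_score(model_scores, weights):
    """Weighted average of per-model cancer-risk scores for one exam.

    The actual DREAM challenge ensemble method (CEM) reweighting is more
    involved; this is only a generic sketch of the idea.
    """
    total_w = sum(weights)
    return sum(s * w for s, w in zip(model_scores, weights)) / total_w

# Hypothetical scores from 11 models for a single screening examination.
scores = [0.12, 0.08, 0.22, 0.15, 0.09, 0.11, 0.30, 0.18, 0.10, 0.14, 0.20]
weights = [1.0] * 11  # uniform weights reduce to a simple mean
print(round(ensemble_score(scores, weights), 3))  # → 0.154
```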
Affiliation(s)
- William Hsu: Medical and Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at University of California, Los Angeles
- Daniel S. Hippe: Clinical Research Division, Fred Hutchinson Cancer Center, Seattle, Washington
- Noor Nakhaei: Medical and Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at University of California, Los Angeles
- Pin-Chieh Wang: Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles
- Bing Zhu: Medical and Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at University of California, Los Angeles
- Nathan Siu: Medical Informatics Home Area, Graduate Programs in Biosciences, David Geffen School of Medicine at University of California, Los Angeles, Los Angeles, California
- Mehmet Eren Ahsen: Gies College of Business, University of Illinois at Urbana-Champaign
- William Lotter: DeepHealth, RadNet AI Solutions, Cambridge, Massachusetts
- Arash Naeim: Center for Systematic, Measurable, Actionable, Resilient, and Technology-driven Health, Clinical and Translational Science Institute, David Geffen School of Medicine at University of California, Los Angeles
- Diana S. M. Buist: Kaiser Permanente Washington Health Research Institute, Seattle, Washington
- Joann G. Elmore: Department of Medicine, David Geffen School of Medicine at University of California, Los Angeles
- Christoph I. Lee: Department of Radiology, University of Washington School of Medicine, Seattle; Department of Health Services, University of Washington School of Public Health, Seattle; Hutchinson Institute for Cancer Outcomes Research, Fred Hutchinson Cancer Center, Seattle, Washington
19. Bao C, Shen J, Zhang Y, Zhang Y, Wei W, Wang Z, Ding J, Han L. Evaluation of an artificial intelligence support system for breast cancer screening in Chinese people based on mammogram. Cancer Med 2022; 12:3718-3726. [PMID: 36082949] [PMCID: PMC9939225] [DOI: 10.1002/cam4.5231]
Abstract
BACKGROUND To evaluate the diagnostic performance of radiologists on breast cancer with or without artificial intelligence (AI) support. METHODS A retrospective study was performed. In total, 643 mammograms (average age: 54 years; female: 100%; cancer: 62.05%) were randomly allocated into two groups. Seventy-five percent of mammograms in each group were randomly selected for assessment by two independent radiologists, and the rest were read once. Half of the 71 radiologists could read mammograms with AI support, and the other half could not. Sensitivity, specificity, Youden's index, agreement rate, Kappa value, the area under the receiver operating characteristic curve (AUC) and the reading time of radiologists in each group were analyzed. RESULTS The average AUC was higher if the AI support system was used (unaided: 0.84; with AI support: 0.91; p < 0.01). The average sensitivity increased from 84.77% to 95.07% with AI support (p < 0.01), but the average specificity decreased (p = 0.07). Youden's index, agreement rate and Kappa value were larger in the group with AI support, and the average reading time was shorter (p < 0.01). CONCLUSIONS The AI support system might contribute to enhancing the diagnostic performance (e.g., higher sensitivity and AUC) of radiologists. In the future, the AI algorithm should be improved, and prospective studies should be conducted.
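AUC values like those compared above can be computed directly from reader scores as the probability that a randomly chosen cancer case scores higher than a randomly chosen non-cancer case (the Mann-Whitney formulation). A minimal sketch with toy data, not the study's ratings:

```python
def auc(labels, scores):
    """Empirical AUROC: probability that a random positive case scores higher
    than a random negative case, counting ties as half (Mann-Whitney form).
    O(n*m) pairwise version, adequate for small reader-study datasets."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores (higher = more suspicious); illustrative only, not the study data.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(round(auc(labels, scores), 3))  # → 0.917
```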
Affiliation(s)
- Chengzhen Bao: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
- Jie Shen: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
- Yue Zhang: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
- Yan Zhang: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
- Wei Wei: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
- Lili Han: Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
20. Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503] [PMCID: PMC9322973] [DOI: 10.3390/cancers14143442]
Abstract
Breast cancer is one of the main causes of death for women worldwide, accounting for 16% of diagnosed malignant lesions globally. It is therefore of paramount importance to diagnose these lesions at the earliest possible stage, to give patients the highest chances of survival. While several works cover selected topics in this area, none presents a complete panorama, that is, from image generation to image interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidate techniques for image generation and processing are presented and discussed. Novel methodologies should consider the skillful integration of artificial-intelligence concepts and categorical data to generate modern alternatives that can deliver the accuracy, precision, and reliability expected to mitigate misclassifications.
Affiliation(s)
- Jesus A. Basurto-Hurtado: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Irving A. Cruz-Albarran: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Manuel Toledano-Ayala: División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N, Las Campanas, Santiago de Querétaro 76010, Mexico
- Mario Alberto Ibarra-Manzano: Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
- Luis A. Morales-Hernandez: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Carlos A. Perez-Ramirez: Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
21. Choudhury A. Toward an Ecologically Valid Conceptual Framework for the Use of Artificial Intelligence in Clinical Settings: Need for Systems Thinking, Accountability, Decision-making, Trust, and Patient Safety Considerations in Safeguarding the Technology and Clinicians. JMIR Hum Factors 2022; 9:e35421. [PMID: 35727615] [PMCID: PMC9257623] [DOI: 10.2196/35421]
Abstract
The health care management and medical practitioner literatures lack a descriptive conceptual framework for understanding the dynamic and complex interactions between clinicians and artificial intelligence (AI) systems. Because most existing work investigates AI's performance and effectiveness from a statistical (analytical) standpoint, studies ensuring AI's ecological validity are lacking. In this study, we derived a framework that focuses explicitly on the interaction between AI and clinicians. The proposed framework builds upon well-established human factors models such as the technology acceptance model and expectancy theory, and it can be used to perform quantitative and qualitative (mixed-methods) analyses capturing how clinician-AI interactions vary with human factors such as expectancy, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns for patient safety. If leveraged, the proposed framework can help identify factors influencing clinicians' intention to use AI and, consequently, improve AI acceptance and address the lack of AI accountability while safeguarding patients, clinicians, and the AI technology. Overall, this paper discusses the concepts, propositions, and assumptions of the multidisciplinary decision-making literature, constituting a sociocognitive approach that extends the theories of distributed cognition and thus accounts for the ecological validity of AI.
Affiliation(s)
- Avishek Choudhury: Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
22. Forrai G, Kovács E, Ambrózay É, Barta M, Borbély K, Lengyel Z, Ormándi K, Péntek Z, Tasnádi T, Sebő É. Use of Diagnostic Imaging Modalities in Modern Screening, Diagnostics and Management of Breast Tumours: 1st Central-Eastern European Professional Consensus Statement on Breast Cancer. Pathol Oncol Res 2022; 28:1610382. [PMID: 35755417] [PMCID: PMC9214693] [DOI: 10.3389/pore.2022.1610382]
Abstract
Breast radiologists and nuclear medicine specialists updated their previous recommendations at the 4th Hungarian Breast Cancer Consensus Conference in Kecskemét, and recommend that breast tumours be screened, diagnosed, and treated according to these guidelines. The guidelines incorporate the latest technical developments and research findings, including the role of imaging methods in therapy and follow-up. They include details on domestic development proposals and also address related areas (forensic medicine, media, regulations, reimbursement). The entire material has been agreed with the related medical disciplines.
Affiliation(s)
- Gábor Forrai
- GÉ-RAD Kft., Budapest, Hungary
- Duna Medical Center, Budapest, Hungary
- Eszter Kovács
- GÉ-RAD Kft., Budapest, Hungary
- Duna Medical Center, Budapest, Hungary
- Katalin Borbély
- National Institute of Oncology, Budapest, Hungary
- Ministry of Human Capacities, Budapest, Hungary
- Tasnádi Tünde
- Dr Réthy Pál Member Hospital of Békés County Central Hospital, Békéscsaba, Hungary
- Éva Sebő
- Kenézy Gyula University Hospital, University of Debrecen, Debrecen, Hungary
23
Ahn H, Jun I, Seo KY, Kim EK, Kim TI. Artificial Intelligence for the Estimation of Visual Acuity Using Multi-Source Anterior Segment Optical Coherence Tomographic Images in Senile Cataract. Front Med (Lausanne) 2022; 9:871382. [PMID: 35655854 PMCID: PMC9152093 DOI: 10.3389/fmed.2022.871382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 04/04/2022] [Indexed: 12/05/2022] Open
Abstract
Purpose To investigate the performance of an artificial intelligence (AI) model using multi-source anterior segment optical coherence tomographic (OCT) images in estimating the preoperative best-corrected visual acuity (BCVA) in patients with senile cataract. Design Retrospective, cross-instrument validation study. Subjects A total of 2,332 anterior segment images obtained using swept-source OCT, optical biometry for intraocular lens calculation, and a femtosecond laser platform in patients with senile cataract and postoperative BCVA ≥ 0.0 logMAR were included in the training/validation dataset. A total of 1,002 images obtained using optical biometry and another femtosecond laser platform in patients who underwent cataract surgery in 2021 were used for the test dataset. Methods AI modeling was based on an ensemble model of Inception-v4 and ResNet. The BCVA training/validation dataset was used for model training. The model performance was evaluated using the test dataset. Analysis of absolute error (AE) was performed by comparing the difference between true preoperative BCVA and estimated preoperative BCVA, as ≥0.1 logMAR (AE≥0.1) or <0.1 logMAR (AE<0.1). AE≥0.1 was classified into underestimation and overestimation groups based on the logMAR scale. Outcome Measurements Mean absolute error (MAE), root mean square error (RMSE), mean percentage error (MPE), and correlation coefficient between true preoperative BCVA and estimated preoperative BCVA. Results The test dataset MAE, RMSE, and MPE were 0.050 ± 0.130 logMAR, 0.140 ± 0.134 logMAR, and 1.3 ± 13.9%, respectively. The correlation coefficient was 0.969 (p < 0.001). The percentage of cases with AE≥0.1 was 8.4%. The incidence of postoperative BCVA > 0.1 was 21.4% in the AE≥0.1 group, of which 88.9% were in the underestimation group. The incidence of vision-impairing disease in the underestimation group was 95.7%.
Preoperative corneal astigmatism and lens thickness were greater, and nuclear cataract was more severe, in the AE≥0.1 group than in the AE<0.1 group (p < 0.001, 0.007, and 0.024, respectively). The longer the axial length and the more severe the cortical/posterior subcapsular opacity, the more the estimated BCVA exceeded the true BCVA. Conclusions The AI model achieved high-level visual acuity estimation in patients with senile cataract. This quantification method encompassed both visual acuity and cataract severity from OCT images, which are the main indications for cataract surgery, showing the potential to objectively evaluate cataract severity.
Affiliation(s)
- Hyunmin Ahn
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
- Ikhyun Jun
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Kyoung Yul Seo
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
- Eung Kweon Kim
- Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea.,Saevit Eye Hospital, Goyang, South Korea
- Tae-Im Kim
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
24
Artificial intelligence for renal cancer: From imaging to histology and beyond. Asian J Urol 2022; 9:243-252. [PMID: 36035341 PMCID: PMC9399557 DOI: 10.1016/j.ajur.2022.05.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 04/07/2022] [Accepted: 05/07/2022] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has made considerable progress within the last decade and is the subject of contemporary literature. This trend is driven by improved computational abilities and increasing amounts of complex data that allow for new approaches in analysis and interpretation. Renal cell carcinoma (RCC) has a rising incidence since most tumors are now detected at an earlier stage due to improved imaging. This creates considerable challenges as approximately 10%–17% of kidney tumors are designated as benign in histopathological evaluation; however, certain co-morbid populations (the obese and elderly) have an increased peri-interventional risk. AI offers an alternative solution by helping to optimize precision and guidance for diagnostic and therapeutic decisions. This narrative review introduces basic principles and provides a comprehensive overview of current AI techniques for RCC. Currently, AI applications can be found in any aspect of RCC management including diagnostics, perioperative care, pathology, and follow-up. The most commonly applied models include neural networks, random forest, support vector machines, and regression. However, for implementation in daily practice, health care providers need to develop a basic understanding and establish interdisciplinary collaborations in order to standardize datasets, define meaningful endpoints, and unify interpretation.
25
Wang L, Chang L, Luo R, Cui X, Liu H, Wu H, Chen Y, Zhang Y, Wu C, Li F, Liu H, Guan W, Wang D. An artificial intelligence system using maximum intensity projection MR images facilitates classification of non-mass enhancement breast lesions. Eur Radiol 2022; 32:4857-4867. [PMID: 35258676 DOI: 10.1007/s00330-022-08553-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 12/20/2021] [Accepted: 12/21/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVES To build an artificial intelligence (AI) system to classify benign and malignant non-mass enhancement (NME) lesions using maximum intensity projection (MIP) of early post-contrast subtracted breast MR images. METHODS This retrospective study collected 965 pure NME lesions (539 benign and 426 malignant) confirmed by histopathology or follow-up in 903 women. The 754 NME lesions acquired by one MR scanner were randomly split into the training set, validation set, and test set A (482/121/151 lesions). The 211 NME lesions acquired by another MR scanner were used as test set B. The AI system was developed using ResNet-50 with the axial and sagittal MIP images. One senior and one junior radiologist reviewed the MIP images of each case independently and rated its Breast Imaging Reporting and Data System category. The performance of the AI system and the radiologists was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS The AI system yielded AUCs of 0.859 and 0.816 in test sets A and B, respectively. The AI system achieved performance comparable to that of the senior radiologist (p = 0.558, p = 0.041) and outperformed the junior radiologist (p < 0.001, p = 0.009) in both test sets A and B. After AI assistance, the AUC of the junior radiologist increased from 0.740 to 0.862 in test set A (p < 0.001) and from 0.732 to 0.843 in test set B (p < 0.001). CONCLUSION Our MIP-based AI system yielded good applicability in classifying NME lesions in breast MRI and can help the junior radiologist achieve better performance. KEY POINTS • Our MIP-based AI system yielded good applicability in datasets from both the same and a different MR scanner in predicting malignant NME lesions. • The AI system achieved diagnostic performance comparable to the senior radiologist and outperformed the junior radiologist. • This AI system can help the junior radiologist achieve better performance in the classification of NME lesions in MRI.
Affiliation(s)
- Lijun Wang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Lufan Chang
- Department of Research & Development, Yizhun Medical AI Co. Ltd., Beijing, China
- Ran Luo
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Xuee Cui
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Huanhuan Liu
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Haoting Wu
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Yanhong Chen
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Yuzhen Zhang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Chenqing Wu
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Fangzhen Li
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Hao Liu
- Department of Research & Development, Yizhun Medical AI Co. Ltd., Beijing, China
- Wenbin Guan
- Department of Pathology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Dengbin Wang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China.
26
Caruso D, Polici M, Lauri C, Laghi A. Radiomics and artificial intelligence. Nucl Med Mol Imaging 2022. [DOI: 10.1016/b978-0-12-822960-6.00072-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
27
Kim J, Kim HJ, Kim C, Lee JH, Kim KW, Park YM, Kim HW, Ki SY, Kim YM, Kim WH. Weakly-supervised deep learning for ultrasound diagnosis of breast cancer. Sci Rep 2021; 11:24382. [PMID: 34934144 PMCID: PMC8692405 DOI: 10.1038/s41598-021-03806-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 11/30/2021] [Indexed: 11/21/2022] Open
Abstract
Conventional deep learning (DL) algorithms require fully supervised annotation of the region of interest (ROI), which is laborious and often biased. We aimed to develop a weakly-supervised DL algorithm that diagnoses breast cancer on ultrasound without image annotation. Weakly-supervised DL algorithms were implemented with three networks (VGG16, ResNet34, and GoogLeNet) and trained using 1000 unannotated US images (500 benign and 500 malignant masses). Two sets of 200 images (100 benign and 100 malignant masses) were used as the internal and external validation sets. For comparison with fully-supervised algorithms, ROI annotation was performed manually and automatically. Diagnostic performance was calculated as the area under the receiver operating characteristic curve (AUC). Using the class activation map, we determined how accurately the weakly-supervised DL algorithms localized the breast masses. For the internal validation sets, the weakly-supervised DL algorithms achieved excellent diagnostic performance, with AUC values of 0.92–0.96, which were not statistically different (all Ps > 0.05) from those of fully-supervised DL algorithms with either manual or automated ROI annotation (AUC, 0.92–0.96). For the external validation sets, the weakly-supervised DL algorithms achieved AUC values of 0.86–0.90, which were not statistically different (Ps > 0.05) or higher (P = 0.04, VGG16 with automated ROI annotation) than those of fully-supervised DL algorithms (AUC, 0.84–0.92). In the internal and external validation sets, the weakly-supervised algorithms could localize 100% of malignant masses, except for ResNet34 (98%). The weakly-supervised DL algorithms developed in the present study were feasible for US diagnosis of breast cancer, performing well in both localization and differential diagnosis.
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
- Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea
- Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
- Jin Hwa Lee
- Department of Radiology, Dong-A University College of Medicine, Busan, Republic of Korea
- Keum Won Kim
- Department of Radiology, School of Medicine, Konyang University, Konyang University Hospital, Daejeon, Republic of Korea
- Young Mi Park
- Department of Radiology, School of Medicine, Inje University, Busan Paik Hospital, Busan, Republic of Korea
- Hye Won Kim
- Department of Radiology, Wonkwang University Hospital, Wonkwang University School of Medicine, Iksan, Republic of Korea
- So Yeon Ki
- Department of Radiology, School of Medicine, Chonnam National University, Chonnam National University Hwasun Hospital, Hwasun, Republic of Korea
- You Me Kim
- Department of Radiology, School of Medicine, Dankook University, Dankook University Hospital, Cheonan, Republic of Korea
- Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea.
28
Mansour S, Kamal R, Hashem L, AlKalaawy B. Can artificial intelligence replace ultrasound as a complementary tool to mammogram for the diagnosis of the breast cancer? Br J Radiol 2021; 94:20210820. [PMID: 34613796 DOI: 10.1259/bjr.20210820] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
OBJECTIVE To study the impact of artificial intelligence (AI) on the performance of mammography in classifying detected breast lesions, in correlation with ultrasound-aided mammography. METHODS Ethics committee approval was obtained for this prospective analysis. The study included 2000 mammograms. The mammograms were interpreted by the radiologists and breast ultrasound was performed for all cases. The Breast Imaging Reporting and Data System (BI-RADS) score was applied to the combined evaluation of the mammography and ultrasound modalities. Each breast side was individually assessed with the aid of AI scanning in the form of a targeted heat-map, and a probability of malignancy (abnormality scoring percentage) was obtained. Operative and histopathology data were the standard of reference. RESULTS Cases assigned as normal (BI-RADS 1), with no lesions, were excluded from the statistical evaluation. The study included 538 benign and 642 malignant breast lesions (n = 1180, 59%). On combined evaluation of digital mammography and ultrasound, 385 lesions were assigned BI-RADS 2 (benign), with a median AI abnormality score of 10% (n = 385/1180, 32.6%), and 471 were assigned BI-RADS 5 (malignant), with a median AI abnormality score of 88% (n = 471/1180, 39.9%). An AI abnormality score of 59% yielded a sensitivity of 96.8% and a specificity of 90.1% in discriminating the breast lesions detected on the included mammograms. CONCLUSION AI could be considered an optional, reliable complementary tool to digital mammography for the evaluation of breast lesions. The colour hue and the abnormality scoring percentage presented a credible method for the detection and discrimination of breast cancer, with accuracy approaching that of breast ultrasound.
Consequently, the AI-mammography combination could be used in a single setting to discriminate cases that require further imaging or biopsy from those that need only interval follow-up. ADVANCES IN KNOWLEDGE AI has recently been incorporated into the work-up of breast cancer, chiefly as a screening strategy for its detection. In the current work, the performance of AI was studied for the diagnosis, not just the detection, of breast cancer in mammographically detected breast lesions. The evaluation considered AI as a possible complementary reading tool to mammography and included qualitative assessment of the colour hue and quantitative integration of the abnormality scoring percentage.
Affiliation(s)
- Sahar Mansour
- Women's Imaging Unit - Kasr El Ainy Hospital- Cairo University, Giza, Egypt.,Department of Radiology, Baheya Foundation for Early Detection and Treatment of Breast Cancer, Giza, Egypt
- Rasha Kamal
- Women's Imaging Unit - Kasr El Ainy Hospital- Cairo University, Giza, Egypt.,Department of Radiology, Baheya Foundation for Early Detection and Treatment of Breast Cancer, Giza, Egypt
- Lamiaa Hashem
- Women's Imaging Unit - Kasr El Ainy Hospital- Cairo University, Giza, Egypt.,Department of Radiology, Baheya Foundation for Early Detection and Treatment of Breast Cancer, Giza, Egypt
- Basma AlKalaawy
- Women's Imaging Unit - Kasr El Ainy Hospital- Cairo University, Giza, Egypt.,Department of Radiology, Baheya Foundation for Early Detection and Treatment of Breast Cancer, Giza, Egypt
29
Wang S, Sun Y, Mao N, Duan S, Li Q, Li R, Jiang T, Wang Z, Xie H, Gu Y. Incorporating the clinical and radiomics features of contrast-enhanced mammography to classify breast lesions: a retrospective study. Quant Imaging Med Surg 2021; 11:4418-4430. [PMID: 34603996 DOI: 10.21037/qims-21-103] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 05/11/2021] [Indexed: 12/21/2022]
Abstract
Background Contrast-enhanced mammography (CEM) is a promising breast imaging technique. A limited number of studies have focused on the radiomics analysis of CEM. We intended to explore whether a model constructed with both clinical and radiomics features of CEM can better classify benign and malignant breast lesions. Methods This retrospective, double-center study included women who underwent CEM between August 2017 and February 2020. The data from Center 1 were used as the training set and the data from Center 2 were used as the external testing set (training:testing = 2:1). Models were constructed with the clinical, radiomics, and clinical + radiomics features of CEM. The clinical features included patient age and clinical image features interpreted by the radiologists. The radiomics features were extracted from high-energy (HE), low-energy (LE), and dual-energy subtraction (DES) images of CEM. The Mann-Whitney U test, Pearson correlation and Boruta's approach were used to select the radiomics features. Random Forest (RF) and logistic regression were used to establish the models. For the testing set, the areas under the curve (AUCs) and 95% confidence intervals (CIs) were employed to evaluate the performance of the models. For the training set, the mean AUCs were obtained by performing internal validation for 100 iterations and then compared by the Kruskal-Wallis and Mann-Whitney U tests. Results A total of 226 women (mean age: 47.4±10.1 years) with 226 pathologically proven breast lesions (101 benign; 125 malignant) were included. For the external testing set, the AUCs were 0.964 (95% CI: 0.918-1.000) for the combined model, 0.947 (95% CI: 0.891-0.997) for the radiomics model, and 0.882 (95% CI: 0.803-0.962) for the clinical model.
In the internal validation process, the combined model achieved a mean AUC of 0.934±0.030, which was significantly higher than those of the radiomics (mean AUC =0.921±0.031, adjusted P<0.050) and clinical models (mean AUC =0.907±0.036; adjusted P<0.050). Conclusions Incorporating both clinical and radiomics features of CEM may achieve better classification results for breast lesions.
Affiliation(s)
- Simin Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yuqi Sun
- Department of Biostatistics, School of Public Health, Fudan University, Shanghai, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Qingdao, China
- Qin Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Ruimin Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Tingting Jiang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Zhongyi Wang
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Qingdao, China
- Haizhu Xie
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Qingdao, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
30
O'Shea RJ, Sharkey AR, Cook GJR, Goh V. Systematic review of research design and reporting of imaging studies applying convolutional neural networks for radiological cancer diagnosis. Eur Radiol 2021; 31:7969-7983. [PMID: 33860829 PMCID: PMC8452579 DOI: 10.1007/s00330-021-07881-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 02/24/2021] [Accepted: 03/12/2021] [Indexed: 11/05/2022]
Abstract
OBJECTIVES To perform a systematic review of design and reporting of imaging studies applying convolutional neural network models for radiological cancer diagnosis. METHODS A comprehensive search of PUBMED, EMBASE, MEDLINE and SCOPUS was performed for published studies applying convolutional neural network models to radiological cancer diagnosis from January 1, 2016, to August 1, 2020. Two independent reviewers measured compliance with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Compliance was defined as the proportion of applicable CLAIM items satisfied. RESULTS One hundred eighty-six of 655 screened studies were included. Many studies did not meet the criteria for current design and reporting guidelines. Twenty-seven percent of studies documented eligibility criteria for their data (50/186, 95% CI 21-34%), 31% reported demographics for their study population (58/186, 95% CI 25-39%) and 49% of studies assessed model performance on test data partitions (91/186, 95% CI 42-57%). Median CLAIM compliance was 0.40 (IQR 0.33-0.49). Compliance correlated positively with publication year (ρ = 0.15, p = .04) and journal H-index (ρ = 0.27, p < .001). Clinical journals demonstrated higher mean compliance than technical journals (0.44 vs. 0.37, p < .001). CONCLUSIONS Our findings highlight opportunities for improved design and reporting of convolutional neural network research for radiological cancer diagnosis. KEY POINTS • Imaging studies applying convolutional neural networks (CNNs) for cancer diagnosis frequently omit key clinical information including eligibility criteria and population demographics. • Fewer than half of imaging studies assessed model performance on explicitly unobserved test data partitions. • Design and reporting standards have improved in CNN research for radiological cancer diagnosis, though many opportunities remain for further progress.
Affiliation(s)
- Robert J O'Shea
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK.
- Amy Rose Sharkey
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK
- Gary J R Cook
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- King's College London & Guy's and St. Thomas' PET Centre, London, UK
- Vicky Goh
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK
31
Coppola F, Faggioni L, Gabelloni M, De Vietro F, Mendola V, Cattabriga A, Cocozza MA, Vara G, Piccinino A, Lo Monaco S, Pastore LV, Mottola M, Malavasi S, Bevilacqua A, Neri E, Golfieri R. Human, All Too Human? An All-Around Appraisal of the "Artificial Intelligence Revolution" in Medical Imaging. Front Psychol 2021; 12:710982. [PMID: 34650476 PMCID: PMC8505993 DOI: 10.3389/fpsyg.2021.710982] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 09/02/2021] [Indexed: 12/22/2022] Open
Abstract
Artificial intelligence (AI) has seen dramatic growth over the past decade, evolving from a niche super specialty computer application into a powerful tool which has revolutionized many areas of our professional and daily lives, and the potential of which seems to be still largely untapped. The field of medicine and medical imaging, as one of its various specialties, has gained considerable benefit from AI, including improved diagnostic accuracy and the possibility of predicting individual patient outcomes and options of more personalized treatment. It should be noted that this process can actively support the ongoing development of advanced, highly specific treatment strategies (e.g., target therapies for cancer patients) while enabling faster workflow and more efficient use of healthcare resources. The potential advantages of AI over conventional methods have made it attractive for physicians and other healthcare stakeholders, raising much interest in both the research and the industry communities. However, the fast development of AI has unveiled its potential for disrupting the work of healthcare professionals, spawning concerns among radiologists that, in the future, AI may outperform them, thus damaging their reputations or putting their jobs at risk. Furthermore, this development has raised relevant psychological, ethical, and medico-legal issues which need to be addressed for AI to be considered fully capable of patient management. The aim of this review is to provide a brief, hopefully exhaustive, overview of the state of the art of AI systems regarding medical imaging, with a special focus on how AI and the entire healthcare environment should be prepared to accomplish the goal of a more advanced human-centered world.
Affiliation(s)
- Francesca Coppola
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
- Lorenzo Faggioni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Fabrizio De Vietro
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Vincenzo Mendola
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Arrigo Cattabriga
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Maria Adriana Cocozza
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Giulio Vara
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Alberto Piccinino
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Silvia Lo Monaco
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Luigi Vincenzo Pastore
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Margherita Mottola
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Silvia Malavasi
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Alessandro Bevilacqua
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Emanuele Neri
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Rita Golfieri
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
32
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938 PMCID: PMC8339920 DOI: 10.3389/fonc.2021.600557] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 07/07/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the most common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Miao Yin
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Mei-Hui Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Jing Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Jun Li
- Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
- Hua-Rong Ye
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland

33
Zhang M, Zhu C, Wang Y, Kong Z, Hua Y, Zhang W, Si X, Ye B, Xu X, Li L, Heng D, Liu B, Tian S, Wu J, Dang Y, Zhang G. Differential diagnosis for esophageal protruded lesions using a deep convolution neural network in endoscopic images. Gastrointest Endosc 2021; 93:1261-1272.e2. [PMID: 33065026] [DOI: 10.1016/j.gie.2020.10.005]
Abstract
BACKGROUND AND AIMS Recent advances in deep convolutional neural networks (CNNs) have led to remarkable results in digestive endoscopy. In this study, we aimed to develop CNN-based models for the differential diagnosis of benign esophageal protruded lesions using endoscopic images acquired in real clinical settings. METHODS We retrospectively reviewed images from 1217 patients who underwent white-light endoscopy (WLE) and EUS between January 2015 and April 2020. Three deep CNN models were developed to accomplish the following tasks: (1) identification of esophageal benign lesions versus healthy controls using WLE images; (2) differentiation of 3 subtypes of esophageal protruded lesions (esophageal leiomyoma [EL], esophageal cyst [EC], and esophageal papilloma [EP]) using WLE images; and (3) discrimination between EL and EC using EUS images. Six endoscopists blinded to the patients' clinical status interpreted all images independently. Their diagnostic performance was evaluated and compared with the CNN models using the area under the receiver operating characteristic curve (AUC). RESULTS For task 1, the CNN model achieved an AUC of 0.751 (95% confidence interval [CI], 0.652-0.850) in identifying benign esophageal lesions. For task 2, the proposed model achieved AUCs of 0.907 (95% CI, 0.835-0.979), 0.897 (95% CI, 0.841-0.953), and 0.868 (95% CI, 0.769-0.968) for EP, EL, and EC, respectively. The CNN model achieved equivalent or higher identification accuracy for EL and EC compared with skilled endoscopists. In discriminating EL from EC (task 3), the proposed CNN model had AUC values of 0.739 (EL; 95% CI, 0.600-0.878) and 0.724 (EC; 95% CI, 0.567-0.881), outperforming senior and novice endoscopists. Combining the CNN and endoscopist predictions led to significantly improved diagnostic accuracy compared with the endoscopists' interpretations alone.
CONCLUSIONS Our team established CNN-based methodologies to recognize benign esophageal protruded lesions using routinely obtained WLE and EUS images. Preliminary results combining the results from the models and the endoscopists underscored the potential of ensemble models for improved differentiation of lesions in real endoscopic settings.
Affiliation(s)
- Min Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Chang Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yun Wang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zihao Kong
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yifei Hua
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Weifeng Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xinmin Si
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Bixing Ye
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xiaobing Xu
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Lurong Li
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Ding Heng
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yini Dang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Guoxin Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China

34
Batchu S, Liu F, Amireh A, Waller J, Umair M. A Review of Applications of Machine Learning in Mammography and Future Challenges. Oncology 2021; 99:483-490. [PMID: 34023831] [DOI: 10.1159/000515698]
Abstract
BACKGROUND The aim of this study is to systematically review the literature and summarize the evidence surrounding the clinical utility of artificial intelligence (AI) in the field of mammography. PubMed, IEEE Xplore, and Scopus were searched for relevant literature. Studies evaluating AI models for the prediction and diagnosis of breast malignancies that also reported conventional performance metrics were deemed suitable for inclusion. From 90 unique citations, 21 studies were considered suitable for examination. Data were not pooled due to heterogeneity in study evaluation methods. SUMMARY Three studies showed the applicability of AI in reducing workload. Six studies demonstrated that AI can aid in diagnosis, with up to a 69% reduction in false positives and an increase in sensitivity ranging from 84% to 91%. Five studies showed how AI models can independently mark and classify suspicious findings on conventional scans, with abilities comparable to those of radiologists. Seven studies examined the predictive potential of AI for breast cancer and risk score calculation. KEY MESSAGES Despite limitations in the current evidence base and technical obstacles, this review suggests that AI has marked potential for extensive use in mammography. Additional work, including large-scale prospective studies, is warranted to elucidate the clinical utility of AI.
Affiliation(s)
- Sai Batchu
- Cooper Medical School of Rowan University, Camden, New Jersey, USA
- Fan Liu
- Stanford University School of Medicine, Stanford, California, USA
- Ahmad Amireh
- Duke University Medical Center, Durham, North Carolina, USA
- Joseph Waller
- Drexel University College of Medicine, Philadelphia, Pennsylvania, USA
- Muhammad Umair
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA

35
Yoon JH, Kim EK. Deep Learning-Based Artificial Intelligence for Mammography. Korean J Radiol 2021; 22:1225-1239. [PMID: 33987993] [PMCID: PMC8316774] [DOI: 10.3348/kjr.2020.1210]
Abstract
During the past decade, researchers have investigated the use of computer-aided mammography interpretation. With the application of deep learning technology, artificial intelligence (AI)-based algorithms for mammography have shown promising results in the quantitative assessment of parenchymal density, detection and diagnosis of breast cancer, and prediction of breast cancer risk, enabling more precise patient management. AI-based algorithms may also enhance the efficiency of the interpretation workflow by reducing both the workload and interpretation time. However, more in-depth investigation is required to conclusively prove the effectiveness of AI-based algorithms. This review article discusses how AI algorithms can be applied to mammography interpretation as well as the current challenges in its implementation in real-world practice.
Affiliation(s)
- Jung Hyun Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Seoul, Korea
- Eun Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea

36
Li J, Bu Y, Lu S, Pang H, Luo C, Liu Y, Qian L. Development of a Deep Learning-Based Model for Diagnosing Breast Nodules With Ultrasound. J Ultrasound Med 2021; 40:513-520. [PMID: 32770574] [DOI: 10.1002/jum.15427]
Abstract
OBJECTIVES Artificial intelligence (AI) has been an important addition to medicine. We aimed to explore the use of deep learning (DL) to distinguish benign from malignant lesions with breast ultrasound (BUS). METHODS The DL model was trained with BUS nodule data using a standard protocol (1271 malignant nodules, 1053 benign nodules, and 2144 images of the contralateral normal breast). The model was tested with 692 images of 256 breast nodules. We used the accuracy, precision, recall, harmonic mean of recall and precision, and mean average precision as the indices to assess the DL model. We used 100 BUS images to evaluate differences in diagnostic accuracy among the AI system, experts (>25 years of experience), and physicians with varying levels of experience. A receiver operating characteristic curve was generated to evaluate the accuracy for distinguishing between benign and malignant breast nodules. RESULTS The DL model showed 73.3% sensitivity and 94.9% specificity for the diagnosis of benign versus malignant breast nodules (area under the curve, 0.943). No significant difference in diagnostic ability was found between the AI system and the expert group (P = .951), although the physicians with lower levels of experience showed significant differences from the AI and expert groups (P = .01 and .03, respectively). CONCLUSIONS Deep learning could distinguish between benign and malignant breast nodules with BUS. On BUS images, DL achieved diagnostic accuracy equivalent to that of expert physicians.
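For context on the performance figures reported above, the standard screening metrics follow directly from a 2x2 confusion table. A minimal sketch in Python; the counts below are invented for illustration (chosen only to echo the reported rates) and are not the study's data:

```python
def sensitivity(tp, fn):
    # True-positive rate: proportion of actual positives correctly identified
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: proportion of actual negatives correctly identified
    return tn / (tn + fp)

def f1(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Illustrative confusion-table counts only (not taken from the study)
tp, fn, tn, fp = 44, 16, 186, 10
print(round(sensitivity(tp, fn), 3))            # 0.733
print(round(specificity(tn, fp), 3))            # 0.949
print(round(f1(tp / (tp + fp), tp / (tp + fn)), 3))  # 0.772
```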
Affiliation(s)
- Jianming Li
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yunyun Bu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Shuqiang Lu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Hao Pang
- School of Software, Beijing University of Posts and Telecommunications, Beijing, China
- Chang Luo
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Yujiang Liu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Linxue Qian
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China

37
Goyal S. An Overview of Current Trends, Techniques, Prospects, and Pitfalls of Artificial Intelligence in Breast Imaging. Reports in Medical Imaging 2021. [DOI: 10.2147/rmi.s295205]

38
[Artificial intelligence in breast imaging: Areas of application from a clinical perspective]. Radiologe 2021; 61:192-198. [PMID: 33507318] [PMCID: PMC7851036] [DOI: 10.1007/s00117-020-00802-2]
Abstract
CLINICAL/METHODOLOGICAL ISSUE Breast imaging must coordinate clinical and multimodal imaging information with percutaneous and surgical interventions. This complexity gives rise to a number of problems: missed carcinomas, overdiagnosis, false-positive findings, and unnecessary further imaging, biopsies, and operations. STANDARD RADIOLOGICAL METHODS The following examination methods are used in breast imaging: mammography, tomosynthesis, contrast-enhanced mammography, (multiparametric) ultrasound, magnetic resonance imaging, computed tomography, nuclear medicine techniques, and their hybrid variants. METHODOLOGICAL INNOVATIONS Artificial intelligence (AI) promises to remedy virtually all problems of breast imaging. It could potentially avoid erroneous findings, make imaging more efficient, and possibly also define biological phenotypes of breast cancer. PERFORMANCE AI-based software is being developed for numerous applications; systems for mammography screening are the most advanced. Problems include single-center approaches and approaches oriented toward short-term financial success. ASSESSMENT AI promises improvements in breast imaging. By simplifying workflows, reducing monotonous and unproductive tasks, and flagging possible errors, it could accelerate largely error-free processes. PRACTICAL RECOMMENDATIONS This article reviews the requirements of breast imaging and possible areas of application of AI. Depending on the definition, practically applicable software tools for breast imaging already exist; global solutions, however, are still lacking.
39
SyReNN: A Tool for Analyzing Deep Neural Networks. Tools and Algorithms for the Construction and Analysis of Systems 2021. [PMCID: PMC7984545] [DOI: 10.1007/978-3-030-72013-1_15]
Abstract
Deep Neural Networks (DNNs) are rapidly gaining popularity in a variety of important domains. Formally, DNNs are complicated vector-valued functions which come in a variety of sizes and applications. Unfortunately, modern DNNs have been shown to be vulnerable to a variety of attacks and buggy behavior. This has motivated recent work in formally analyzing the properties of such DNNs. This paper introduces SyReNN, a tool for understanding and analyzing a DNN by computing its symbolic representation. The key insight is to decompose the DNN into linear functions. Our tool is designed for analyses using low-dimensional subsets of the input space, a unique design point in the space of DNN analysis tools. We describe the tool and the underlying theory, then evaluate its use and performance on three case studies: computing Integrated Gradients, visualizing a DNN’s decision boundaries, and patching a DNN.
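The paper's key insight, that a piecewise-linear DNN restricted to a low-dimensional input subset decomposes into finitely many linear functions, can be illustrated on a toy one-dimensional ReLU network. This is a hedged sketch of the idea only, not SyReNN's actual API:

```python
# A ReLU network restricted to a 1-D line segment is piecewise linear;
# its breakpoints are the inputs where some pre-activation crosses zero.
def relu_net(x, w1, b1, w2, b2):
    # One hidden ReLU layer followed by a linear output
    hidden = [max(0.0, w * x + b) for w, b in zip(w1, b1)]
    return sum(wo * h for wo, h in zip(w2, hidden)) + b2

def breakpoints(w1, b1, lo, hi):
    # Each hidden unit w*x + b = 0 contributes a candidate kink at x = -b/w
    pts = [-b / w for w, b in zip(w1, b1) if w != 0 and lo < -b / w < hi]
    return sorted(pts)

w1, b1 = [1.0, -1.0], [0.0, 2.0]   # toy 2-unit hidden layer
w2, b2 = [1.0, 1.0], 0.0
print(breakpoints(w1, b1, -5.0, 5.0))  # [0.0, 2.0]
```

Between consecutive breakpoints the network is exactly one linear function, which is what makes symbolic analyses such as decision-boundary visualization tractable on low-dimensional input subsets.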
40
Fujioka T, Yashima Y, Oyama J, Mori M, Kubota K, Katsuta L, Kimura K, Yamaga E, Oda G, Nakagawa T, Kitazume Y, Tateishi U. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn Reson Imaging 2020; 75:1-8. [PMID: 33045323] [DOI: 10.1016/j.mri.2020.10.003]
Abstract
PURPOSE We aimed to evaluate a deep learning approach with convolutional neural networks (CNNs) to discriminate between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging (MRI). METHODS We retrospectively gathered maximum intensity projections of dynamic contrast-enhanced breast MRI of 106 benign (including 22 normal) and 180 malignant cases as training and validation data. CNN models were constructed to calculate the probability of malignancy using six architectures (DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, NasNetMobile, and Xception) trained for 500 epochs, and were then applied to test data of 25 benign (including 12 normal) and 47 malignant cases. Two human readers also interpreted these test data and scored the probability of malignancy for each case using the Breast Imaging Reporting and Data System. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated. RESULTS The CNN models showed a mean AUC of 0.830 (range, 0.750-0.895). The best model was InceptionResNetV2. This model, Reader 1, and Reader 2 had sensitivities of 74.5%, 72.3%, and 78.7%; specificities of 96.0%, 88.0%, and 80.0%; and AUCs of 0.895, 0.823, and 0.849, respectively. No significant difference arose between the CNN models and the human readers (p > 0.125). CONCLUSION Our CNN models showed diagnostic performance comparable to that of human readers in differentiating between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast MRI.
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yuka Yashima
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Jun Oyama
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Kazunori Kubota
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan; Department of Radiology, Dokkyo Medical University, Tochigi, Japan
- Leona Katsuta
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Koichiro Kimura
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Goshi Oda
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Tsuyoshi Nakagawa
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Yoshio Kitazume
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan

41
Chang Sen LQ, Mayo RC, Leung JW. Concerns about the economics of mammography and how radiologists can respond. Clin Imaging 2020; 66:84-86. [DOI: 10.1016/j.clinimag.2020.05.002]
42
Yan M, Wang W. Development of a Radiomics Prediction Model for Histological Type Diagnosis in Solitary Pulmonary Nodules: The Combination of CT and FDG PET. Front Oncol 2020; 10:555514. [PMID: 33042839] [PMCID: PMC7523028] [DOI: 10.3389/fonc.2020.555514]
Abstract
PURPOSE To develop a diagnostic model for histological subtypes of lung cancer combining CT and FDG PET. METHODS Machine learning was applied to binary and four-class classification in a cohort of 445 lung cancer patients who underwent CT and PET simultaneously. The outcomes to be predicted were primary tumor, metastases (Mts), adenocarcinoma (Adc), and squamous cell carcinoma (Sqc). The classification method combined machine learning with Partition-Membership feature selection. The performance metrics included accuracy (Acc), precision (Pre), area under the curve (AUC), and kappa statistics. RESULTS The combined CT and PET radiomics (CPR) binary models showed more than 98% Acc and AUC in predicting Adc, Sqc, primary tumors, and metastases, and the CPR four-class classification model showed 91% Acc and a kappa of 0.89. CONCLUSION The proposed CPR models can provide valid predictions of histological subtypes in lung cancer patients, assisting diagnosis and shortening the time to diagnosis.
Affiliation(s)
- Mengmeng Yan
- Urban Vocational College of Sichuan, Chengdu, China
- Sichuan Cancer Hospital & Institute, Chengdu, China
- Weidong Wang
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Chengdu, China
- Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Cancer Hospital, Chengdu, China

43
Using Deep Learning with Convolutional Neural Network Approach to Identify the Invasion Depth of Endometrial Cancer in Myometrium Using MR Images: A Pilot Study. Int J Environ Res Public Health 2020; 17:5993. [PMID: 32824765] [PMCID: PMC7460520] [DOI: 10.3390/ijerph17165993]
Abstract
Myometrial invasion affects the prognosis of endometrial cancer. However, discrepancies exist between pre-operative magnetic resonance imaging staging and post-operative pathological staging. This study aims to validate the accuracy of artificial intelligence (AI) for detecting the depth of myometrial invasion using a deep learning technique on magnetic resonance images. We obtained 4896 contrast-enhanced T1-weighted images (T1w) and T2-weighted images (T2w) from 72 patients who were diagnosed with surgico-pathological stage I endometrial carcinoma. We used the images from 24 patients (33.3%) to train the AI. The images from the remaining 48 patients (66.7%) were used to evaluate the accuracy of the model. The AI then interpreted each of the cases and sorted them into stage IA or IB. Compared with the accuracy rate of radiologists’ diagnoses (77.8%), the accuracy rate of AI interpretation in contrast-enhanced T1w was higher (79.2%), whereas that in T2w was lower (70.8%). The diagnostic accuracy was not significantly different between radiologists and AI for both T1w and T2w. However, AI was more likely to provide incorrect interpretations in patients with coexisting benign leiomyomas or polypoid tumors. Currently, the ability of this AI technology to make an accurate diagnosis has limitations. However, in hospitals with limited resources, AI may be able to assist in reading magnetic resonance images. We believe that AI has the potential to assist radiologists or serve as a reasonable alternative for pre-operative evaluation of the myometrial invasion depth of stage I endometrial cancers.
44
Bahl M. Artificial Intelligence: A Primer for Breast Imaging Radiologists. J Breast Imaging 2020; 2:304-314. [PMID: 32803154] [PMCID: PMC7418877] [DOI: 10.1093/jbi/wbaa033]
Abstract
Artificial intelligence (AI) is a branch of computer science dedicated to developing computer algorithms that emulate intelligent human behavior. Subfields of AI include machine learning and deep learning. Advances in AI technologies have led to techniques that could increase breast cancer detection, improve clinical efficiency in breast imaging practices, and guide decision-making regarding screening and prevention strategies. This article reviews key terminology and concepts, discusses common AI models and methods to validate and evaluate these models, describes emerging AI applications in breast imaging, and outlines challenges and future directions. Familiarity with AI terminology, concepts, methods, and applications is essential for breast imaging radiologists to critically evaluate these emerging technologies, recognize their strengths and limitations, and ultimately ensure optimal patient care.
Affiliation(s)
- Manisha Bahl
- Massachusetts General Hospital, Department of Radiology, Boston, MA
45
Fujioka T, Kubota K, Mori M, Kikuchi Y, Katsuta L, Kimura M, Yamaga E, Adachi M, Oda G, Nakagawa T, Kitazume Y, Tateishi U. Efficient Anomaly Detection with Generative Adversarial Network for Breast Ultrasound Imaging. Diagnostics (Basel) 2020; 10:456. [PMID: 32635547] [PMCID: PMC7400007] [DOI: 10.3390/diagnostics10070456]
Abstract
We aimed to use generative adversarial network (GAN)-based anomaly detection to diagnose images of normal tissue, benign masses, or malignant masses on breast ultrasound. We retrospectively collected 531 normal breast ultrasound images from 69 patients. Data augmentation was performed and 6372 (531 × 12) images were available for training. Efficient GAN-based anomaly detection was used to construct a computational model to detect anomalous lesions in images and calculate abnormalities as an anomaly score. Images of 51 normal tissues, 48 benign masses, and 72 malignant masses were analyzed for the test data. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of this anomaly detection model were calculated. Malignant masses had significantly higher anomaly scores than benign masses (p < 0.001), and benign masses had significantly higher scores than normal tissues (p < 0.001). Our anomaly detection model had high sensitivities, specificities, and AUC values for distinguishing normal tissues from benign and malignant masses, with even greater values for distinguishing normal tissues from malignant masses. GAN-based anomaly detection shows high performance for the detection and diagnosis of anomalous lesions in breast ultrasound images.
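The anomaly-score logic described above ultimately reduces to ranking images by score and applying thresholds to separate normal, benign, and malignant cases. A deliberately simplified sketch; the scores and thresholds below are invented, and the actual model derives its scores from the GAN's reconstruction and discriminator losses:

```python
def classify(score, t_benign, t_malignant):
    # Higher anomaly score = more abnormal tissue.
    # Thresholds split the score range into three bands.
    if score < t_benign:
        return "normal"
    if score < t_malignant:
        return "benign"
    return "malignant"

# Invented anomaly scores for three hypothetical cases
scores = {"case1": 0.05, "case2": 0.40, "case3": 0.90}
labels = {k: classify(v, 0.2, 0.7) for k, v in scores.items()}
print(labels)  # {'case1': 'normal', 'case2': 'benign', 'case3': 'malignant'}
```

In practice the thresholds would be chosen from a validation set, e.g. by maximizing Youden's index on the ROC curve reported in the study.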
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Correspondence: Tel.: +81-3-5803-5311
- Kazunori Kubota
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Department of Radiology, Dokkyo Medical University, 880 Kitakobayashi, Mibu, Shimotsugagun, Tochigi 321-0293, Japan
- Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Yuka Kikuchi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Leona Katsuta
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Mizuki Kimura
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Mio Adachi
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Goshi Oda
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Tsuyoshi Nakagawa
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Yoshio Kitazume
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan

46
Detection and Diagnosis of Breast Cancer Using Artificial Intelligence-Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images. Diagnostics (Basel) 2020; 10:330. [PMID: 32443922] [PMCID: PMC7277981] [DOI: 10.3390/diagnostics10050330]
Abstract
We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions on maximum intensity projections (MIPs) of dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI as training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect lesions and calculate the possibility of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data, with and without the assistance of the AI system, for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI, using a cutoff value of 2%. The AI system showed better diagnostic performance than the human readers (p = 0.002), and the readers' AUC was significantly higher with than without AI assistance (p = 0.039). Our AI system showed high performance in detecting and diagnosing lesions on MIPs of DCE breast MRI and increased the diagnostic performance of human readers.
|
47
|
Yan M, Wang W. A Non-invasive Method to Diagnose Lung Adenocarcinoma. Front Oncol 2020; 10:602. [PMID: 32411600 PMCID: PMC7200977 DOI: 10.3389/fonc.2020.00602] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2020] [Accepted: 04/02/2020] [Indexed: 12/19/2022] Open
Abstract
Purpose: To identify CT radiomics features that differentiate lung adenocarcinoma from other lung cancer histological types. Methods: This was a historical cohort study that included three independent lung cancer cohorts. One cohort was used to evaluate the stability of radiomics features, one for feature selection, and the last to construct and evaluate classification models. The research was divided into four steps: region-of-interest segmentation, feature extraction, feature selection, and model building and validation. The feature selection methods included the intraclass correlation coefficient, the ReliefF coefficient, and the Partition-Membership filter. The performance metrics of the classification models included accuracy (Acc), precision (Pre), area under the curve (AUC), and the kappa statistic. Results: Ten features (first-order shape features: Sphericity and Compacity; gray-level run length matrix: Short-Run Emphasis, Low Gray-level Run Emphasis, and High Gray-level Run Emphasis; gray-level co-occurrence matrix: Homogeneity, Energy, Contrast, Correlation, and Dissimilarity) showed the greatest stability and classification capability. Six classifiers, logistic regression (LR), the Sequential Minimal Optimization algorithm, Random Forest, KStar, Naive Bayes, and Random Committee, performed well on both the training and test sets, with LR performing best on the test set (Acc = 98.72, Pre = 0.988, AUC = 1, and kappa = 0.974). Conclusion: Lung adenocarcinoma can be identified from CT radiomics features, allowing non-invasive diagnosis.
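The kappa statistic reported alongside accuracy above corrects observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for a binary classifier, on toy labels rather than the study's data:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    where chance agreement comes from the marginal label frequencies."""
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_chance = sum((list(y_true).count(l) / n) * (list(y_pred).count(l) / n)
                   for l in labels)
    return (p_obs - p_chance) / (1 - p_chance)
```

A kappa of 0.974, as reported for the LR model, indicates near-perfect agreement beyond chance; a classifier that merely guessed the majority class could still score high accuracy on an imbalanced set but would score near zero on kappa.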
Affiliation(s)
- Mengmeng Yan
- Urban Vocational College of Sichuan, Chengdu, China
- School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Weidong Wang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Chengdu, China
- Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, China
|
48
|
Should We Ignore, Follow, or Biopsy? Impact of Artificial Intelligence Decision Support on Breast Ultrasound Lesion Assessment. AJR Am J Roentgenol 2020; 214:1445-1452. [PMID: 32319794 DOI: 10.2214/ajr.19.21872] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
OBJECTIVE. The objective of this study was to assess the impact of artificial intelligence (AI)-based decision support (DS) on breast ultrasound (US) lesion assessment. MATERIALS AND METHODS. A multicenter retrospective review of 900 breast lesions (470/900 [52.2%] benign; 430/900 [47.8%] malignant) on US was performed by 15 physicians (11 radiologists, two surgeons, two obstetrician/gynecologists). An AI system (Koios DS for Breast, Koios Medical) evaluated images and assigned them to one of four categories: benign, probably benign, suspicious, and probably malignant. Each reader reviewed cases twice: 750 cases with US only or with US plus DS; 4 weeks later, the cases were reviewed in the opposite format. One hundred fifty additional cases were presented identically in each session. DS and reader sensitivity, specificity, and positive likelihood ratios (PLRs) were calculated, as well as reader AUCs with and without DS. The Kendall τ-b correlation coefficient was used to assess intra- and interreader variability. RESULTS. Mean reader AUC for cases reviewed with US only was 0.83 (95% CI, 0.78-0.89); for cases reviewed with US plus DS, mean AUC was 0.87 (95% CI, 0.84-0.90). PLR for the DS system was 1.98 (95% CI, 1.78-2.18) and was higher than the PLR for all readers but one. Fourteen readers had better AUC with US plus DS than with US only. Mean Kendall τ-b for US-only interreader variability was 0.54 (95% CI, 0.53-0.55); for US plus DS, it was 0.68 (95% CI, 0.67-0.69). Intrareader variability improved with DS; class switching (defined as crossing from BI-RADS category 3 to BI-RADS category 4A or above) occurred in 13.6% of cases with US only versus 10.8% of cases with US plus DS (p = 0.04). CONCLUSION. AI-based DS improves accuracy of sonographic breast lesion assessment while reducing inter- and intraobserver variability.
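The Kendall τ-b statistic used above to quantify reader agreement counts concordant versus discordant rating pairs, with a correction for ties (common with ordinal BI-RADS-style categories). A minimal sketch on toy ratings, not the study's data; `scipy.stats.kendalltau` computes the same statistic:

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall tau-b: (C - D) / sqrt((n0 - ties_x) * (n0 - ties_y)),
    where n0 is the total number of pairs."""
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0 and dy == 0:      # tied in both ratings
            ties_x += 1
            ties_y += 1
        elif dx == 0:                # tied in x only
            ties_x += 1
        elif dy == 0:                # tied in y only
            ties_y += 1
        elif dx * dy > 0:            # same ordering in both
            concordant += 1
        else:                        # opposite ordering
            discordant += 1
    n0 = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))
```

Values near 1 mean two readers (or one reader across sessions) rank lesions nearly identically, which is why the rise from 0.54 to 0.68 with DS indicates reduced interreader variability.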
|
49
|
Hardy M, Harvey H. Artificial intelligence in diagnostic imaging: impact on the radiography profession. Br J Radiol 2020; 93:20190840. [PMID: 31821024 PMCID: PMC7362930 DOI: 10.1259/bjr.20190840] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 11/29/2019] [Accepted: 12/04/2019] [Indexed: 02/06/2023] Open
Abstract
The arrival of artificially intelligent systems into the domain of medical imaging has focused attention and sparked much debate on the role and responsibilities of the radiologist. However, discussion about the impact of such technology on the radiographer role is lacking. This paper discusses the potential impact of artificial intelligence (AI) on the radiography profession by assessing current workflow and cross-mapping potential areas of AI automation such as procedure planning, image acquisition and processing. We also highlight the opportunities that AI brings including enhancing patient-facing care, increased cross-modality education and working, increased technological expertise and expansion of radiographer responsibility into AI-supported image reporting and auditing roles.
|
50
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 76] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in the patient care process, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential of a major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we will provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
|