1. Ju Y, Zhang G, Wan Y, Wang G, Shu R, Zhang P, Song H. Integration of AI lesion classification, age, and BI-RADS assessment to reduce benign biopsies on breast ultrasound. Eur Radiol 2025. [PMID: 40121344] [DOI: 10.1007/s00330-025-11467-7]
Abstract
OBJECTIVES To develop and test AI-integrated biopsy avoidance strategies to improve the specificity of screening breast ultrasound (US). MATERIALS AND METHODS This retrospective study included consecutive asymptomatic women with BI-RADS 3, 4a, 4b, 4c, or 5 masses on screening breast US exams acquired from two hospitals between December 2019 and December 2020 (development cohort) and June 2020 and December 2020 (external validation cohort). If more than one lesion was present, the most suspicious lesion was analyzed. Logistic regression was used to develop the AI-integrated biopsy avoidance strategies in which BI-RADS 4a masses were downgraded to BI-RADS 3 if the AI classifications were "both planes benign" in all women or "benign and malignant" in the women ≤ 45 years of age. Diagnostic performance metrics were calculated for both cohorts and compared to initial assessments by radiologists using the Wilcoxon rank-sum test for noninferiority of sensitivity (relative noninferiority margin, 5%) and the McNemar test for specificity. RESULTS The development and external validation cohorts consisted of 393 women (median age, 45 years [IQR, 40-50 years]) with 101 malignancies and 166 women (median age, 47 years [IQR, 42-51 years]) with 31 malignancies, respectively. The developed strategy improved specificity from 53.3% (72/135; 95% CI: 45.0, 62.1) to 80.7% (109/135; [95% CI: 74.2, 87.5]; p < 0.001) while maintaining sensitivity (both 100% [31/31; 95% CI: 98.9, 100]), and would have avoided 61.7% (37/60 [95% CI: 48.2, 73.7]) of benign biopsies of BI-RADS 4a masses in the external validation cohort. CONCLUSION A strategy integrating AI classification in two orthogonal planes, age, and BI-RADS classification improved the specificity of screening breast US while maintaining non-inferior sensitivity. KEY POINTS Question How can integrating AI lesion classification, age, and BI-RADS assessment effectively reduce benign biopsies in screening breast ultrasound? Findings A strategy integrating AI classifications, age, and BI-RADS using multivariable logistic regression improved specificity while maintaining non-inferior sensitivity in breast ultrasound screening. Clinical relevance The integration of AI classification in two orthogonal planes, along with patient age and BI-RADS classification, shows potential for reducing benign breast biopsies without compromising sensitivity, leading to more efficient clinical decision-making, reduced patient anxiety, and decreased healthcare resource utilization.
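For illustration only, the downgrade rule and headline metrics described above can be sketched in a few lines of Python. The field names (birads, ai_two_planes, age, malignant) and the toy records are assumptions made for this sketch, not the authors' implementation; sensitivity and specificity are computed by treating a post-strategy BI-RADS of 4a or higher as a biopsy recommendation.

```python
# Hypothetical sketch of the biopsy-avoidance strategy described above.
# Field names and the toy records are assumptions, not the authors' code.

def apply_strategy(lesion):
    """Return the post-strategy BI-RADS category for one lesion (a dict)."""
    if lesion["birads"] == "4a":
        if lesion["ai_two_planes"] == "both benign":
            return "3"
        if lesion["ai_two_planes"] == "benign and malignant" and lesion["age"] <= 45:
            return "3"
    return lesion["birads"]

def sensitivity_specificity(lesions):
    """Treat a post-strategy BI-RADS of 4a or higher as a biopsy recommendation."""
    positive = {"4a", "4b", "4c", "5"}
    tp = fp = tn = fn = 0
    for les in lesions:
        predicted_positive = apply_strategy(les) in positive
        if les["malignant"]:
            tp, fn = (tp + 1, fn) if predicted_positive else (tp, fn + 1)
        else:
            fp, tn = (fp + 1, tn) if predicted_positive else (fp, tn + 1)
    return tp / (tp + fn), tn / (tn + fp)

lesions = [
    {"birads": "4a", "ai_two_planes": "both benign", "age": 50, "malignant": False},
    {"birads": "4a", "ai_two_planes": "benign and malignant", "age": 40, "malignant": False},
    {"birads": "4b", "ai_two_planes": "both malignant", "age": 62, "malignant": True},
    {"birads": "3", "ai_two_planes": "both benign", "age": 41, "malignant": False},
]
print(sensitivity_specificity(lesions))  # (1.0, 1.0) for this toy set
```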
Affiliation(s)
- Yan Ju: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Zhang: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yi Wan: Department of Health Services, Fourth Military Medical University, Xi'an, China
- Gang Wang: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Rui Shu: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Panpan Zhang: Department of Ultrasound, Taizhou Hospital of Zhejiang Province, Zhejiang University, Linhai, China
- Hongping Song: Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, Xi'an, China
2. Cui XW, Goudie A, Blaivas M, Chai YJ, Chammas MC, Dong Y, Stewart J, Jiang TA, Liang P, Sehgal CM, Wu XL, Hsieh PCC, Adrian S, Dietrich CF. WFUMB Commentary Paper on Artificial Intelligence in Medical Ultrasound Imaging. Ultrasound in Medicine & Biology 2025; 51:428-438. [PMID: 39672681] [DOI: 10.1016/j.ultrasmedbio.2024.10.016]
Abstract
Artificial intelligence (AI) is defined as the theory and development of computer systems able to perform tasks normally associated with human intelligence. At present, AI is widely used in a variety of ultrasound tasks, including point-of-care ultrasound, echocardiography, and the assessment of various diseases of different organs. However, the characteristics of ultrasound, compared with other imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), pose significant additional challenges to AI. Application of AI can not only reduce variability during ultrasound image acquisition but can also standardize image interpretation and identify patterns that escape the human eye and brain. These advances have enabled greater innovation in ultrasound AI applications that can be applied to a variety of clinical settings and disease states. Therefore, the World Federation for Ultrasound in Medicine and Biology (WFUMB) is addressing the topic with a brief and practical overview of current and potential future AI applications in medical ultrasound, together with a discussion of some current limitations and future challenges to AI implementation.
Affiliation(s)
- Xin Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College and State Key Laboratory for Diagnosis and Treatment of Severe Zoonotic Infectious Diseases, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Adrian Goudie: Department of Emergency, Fiona Stanley Hospital, Perth, Australia
- Michael Blaivas: Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
- Young Jun Chai: Department of Surgery, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Maria Cristina Chammas: Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
- Yi Dong: Department of Ultrasound, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jonathon Stewart: School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Tian-An Jiang: Department of Ultrasound Medicine, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Ping Liang: Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
- Chandra M Sehgal: Ultrasound Research Lab, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Xing-Long Wu: School of Computer Science & Engineering, Wuhan Institute of Technology, Wuhan, Hubei, China
- Adrian Saftoiu: Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Christoph F Dietrich: Department General Internal Medicine (DAIM), Hospitals Hirslanden Bern Beau Site, Salem and Permanence, Bern, Switzerland
3. Du L, Liu H, Cai M, Pan J, Zha H, Nie C, Lin M, Li C, Zong M, Zhang B. Ultrasound S-detect system can improve diagnostic performance of less experienced radiologists in differentiating breast masses: a retrospective dual-centre study. Br J Radiol 2025; 98:404-411. [PMID: 39535865] [DOI: 10.1093/bjr/tqae233]
Abstract
OBJECTIVE To compare the performance of radiologists assisted by the S-Detect system with that of radiologists or the S-Detect system alone in diagnosing breast masses on US images in a dual-centre setting. METHODS US images of 296 breast masses (150 benign, 146 malignant) were retrospectively identified by investigators at 2 medical centres. Six radiologists from the 2 centres independently analysed the US images and classified each mass into categories 2-5. The radiologists then re-reviewed the images with the use of the S-Detect system. The diagnostic value of radiologists alone, S-Detect alone, and radiologists + S-Detect was analysed and compared. RESULTS With the S-Detect system, radiologists significantly decreased their average false negative rate (FNR) for diagnosing breast masses (-10.7%; P < .001) and increased the area under the receiver operating characteristic curve (AUC) from 0.743 to 0.788 (P < .001). Seventy-seven of the 888 US image assessments from the 6 radiologists were changed positively (from false positive to true negative or from false negative to true positive) with S-Detect, whereas 39 of 888 were altered negatively. CONCLUSION Radiologists performed better in the diagnosis of malignant breast masses on US images with the S-Detect system than without it. ADVANCES IN KNOWLEDGE The study reports an improvement in sensitivity and AUC, particularly for low- to intermediate-level radiologists, involves cases and radiologists from 2 different centres, and compares the diagnostic value of the S-Detect system for masses of different sizes.
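As a minimal, self-contained sketch (not the study's code), the two headline metrics, false negative rate and AUC, can be computed for a reader with and without CAD assistance as follows; the label and rating arrays are invented placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])            # 1 = malignant (toy labels)
rating_alone = np.array([4, 3, 3, 2, 5, 4, 3, 2])      # reader's BI-RADS-like score alone
rating_with_cad = np.array([5, 4, 3, 2, 5, 3, 4, 2])   # same reader after reviewing S-Detect

def false_negative_rate(y_true, rating, threshold=4):
    """A mass is called positive when the ordinal rating >= threshold (e.g., BI-RADS 4)."""
    predicted_positive = rating >= threshold
    fn = np.sum(~predicted_positive & (y_true == 1))
    return fn / np.sum(y_true == 1)

for name, rating in [("alone", rating_alone), ("with CAD", rating_with_cad)]:
    print(name, "FNR:", false_negative_rate(y_true, rating),
          "AUC:", round(roc_auc_score(y_true, rating), 3))
```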
Affiliation(s)
- Liwen Du: Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Hongli Liu: Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Mengjun Cai: Department of Ultrasound, The Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing 210008, China
- Jiazhen Pan: Department of Ultrasound, Jiangsu Cancer Hospital, Nanjing 210009, China
- Hailing Zha: Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Chenlei Nie: Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Minjia Lin: Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Cuiying Li: Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Min Zong: Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Bo Zhang: Department of Ultrasound, Shanghai East Clinical Medical College, Nanjing Medical University, Nanjing 211166, China
4. Hong YT, Yu ZH, Chou CP. Comparative Study of AI Modes in Ultrasound Diagnosis of Breast Lesions. Diagnostics (Basel) 2025; 15:560. [PMID: 40075807] [PMCID: PMC11898511] [DOI: 10.3390/diagnostics15050560]
Abstract
Objectives: This study evaluated the diagnostic performance of the S-Detect ultrasound system's three selectable AI modes-high-sensitivity (HSe), high-accuracy (HAc), and high-specificity (HSp)-for breast lesion diagnosis, comparing their performance in a clinical setting. Methods: This retrospective analysis evaluated 260 breast lesions from ultrasound images of 232 women (mean age: 50.2 years) using the S-Detect system. Each lesion was analyzed under the HSe, HAc, and HSp modes. The study employed ROC curve analysis to comprehensively compare the diagnostic performance of the AI modes against radiologist diagnoses. Subgroup analyses focused on the age (<45, 45-55, >55 years) and lesion size (<1 cm, 1-2 cm, >2 cm). Results: Among the 260 lesions, 73% were identified as benign and 27% as malignant. Radiologists achieved a sensitivity of 98.6%, specificity of 64.2%, and accuracy of 73.5%. The HSe mode exhibited the highest sensitivity at 95.7%. The HAc mode excelled with the highest accuracy (86.2%) and positive predictive value (71.3%), while the HSp mode had the highest specificity at 95.8%. In the age-based subgroup analyses, the HAc mode consistently showed the highest area under the curve (AUC) across all categories. The HSe mode achieved the highest AUC (0.726) for lesions smaller than 1 cm. In the case of lesions sized 1-2 cm and larger than 2 cm, the HAc mode showed the highest AUCs of 0.906 and 0.776, respectively. Conclusions: The S-Detect HSe mode matches radiologists' performance. Alternative modes provide sensitivity and specificity adjustments. The patient age and lesion size influence the diagnostic performance across all S-Detect modes.
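For readers who want to reproduce this kind of subgroup analysis on their own data, the sketch below shows one way to compute per-stratum AUCs by age band and lesion size with pandas and scikit-learn; the column names and values are placeholders, not data from this study.

```python
# Illustrative only: subgroup AUCs by age band and lesion size, as in the abstract.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "malignant": [1, 0, 1, 0, 1, 0, 1, 0],
    "hac_score": [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3],  # e.g., HAc-mode output
    "age_band":  ["<45", "<45", "45-55", "45-55", ">55", ">55", "<45", ">55"],
    "size_band": ["<1cm", "<1cm", "1-2cm", "1-2cm", ">2cm", ">2cm", "1-2cm", "<1cm"],
})

for col in ("age_band", "size_band"):
    for level, sub in df.groupby(col):
        if sub["malignant"].nunique() == 2:      # AUC needs both classes in the stratum
            print(col, level, roc_auc_score(sub["malignant"], sub["hac_score"]))
```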
Affiliation(s)
- Yu-Ting Hong: Radiology Department, Kaohsiung Veterans General Hospital, Kaohsiung 813414, Taiwan
- Zi-Han Yu: Radiology Department, Kaohsiung Veterans General Hospital, Kaohsiung 813414, Taiwan; Department of Radiology, Jiannren Hospital, Kaohsiung 813414, Taiwan
- Chen-Pin Chou: Radiology Department, Kaohsiung Veterans General Hospital, Kaohsiung 813414, Taiwan; Department of Medical Laboratory Science and Biotechnology, Fooyin University, Kaohsiung 831301, Taiwan; Department of Pharmacy, College of Pharmacy, Tajen University, Pingtung 907101, Taiwan
5. Mendez M, Castillo F, Probyn L, Kras S, Tyrrell PN. Leveraging domain knowledge for synthetic ultrasound image generation: a novel approach to rare disease AI detection. Int J Comput Assist Radiol Surg 2025; 20:415-431. [PMID: 39739290] [DOI: 10.1007/s11548-024-03309-6]
Abstract
PURPOSE This study explores the use of deep generative models to create synthetic ultrasound images for the detection of hemarthrosis in hemophilia patients. Addressing the challenge of sparse datasets in rare disease diagnostics, the study aims to enhance AI model robustness and accuracy through the integration of domain knowledge into the synthetic image generation process. METHODS The study employed two ultrasound datasets: a base dataset (Db) of knee recess distension images from non-hemophiliac patients and a target dataset (Dt) of hemarthrosis images from hemophiliac patients. The synthetic generation framework included a content generator (Gc) trained on Db and a context generator (Gs) to adapt these images to match Dt's context. This approach generated a synthetic target dataset (Ds), primed for AI training in rare disease research. The assessment of synthetic image generation involved expert evaluations, statistical analysis, and the use of domain-invariant perceptual distance and Fréchet inception distance for quality measurement. RESULTS Expert evaluation revealed that images produced by our synthetic generation framework were comparable to real ones, with no significant difference in overall quality or anatomical accuracy. Additionally, the use of synthetic data in training convolutional neural networks demonstrated robustness in detecting hemarthrosis, especially with limited sample sizes. CONCLUSION This study presents a novel approach for generating synthetic ultrasound images for rare disease detection, such as hemarthrosis in hemophiliac knees. By leveraging deep generative models and integrating domain knowledge, the proposed framework successfully addresses the limitations of sparse datasets and enhances AI model training and robustness. The synthetic images produced are of high quality and contribute significantly to AI-driven diagnostics in rare diseases, highlighting the potential of synthetic data in medical imaging.
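One of the quality measures named above, the Fréchet inception distance, reduces to a closed-form comparison of feature means and covariances. The sketch below implements that generic formula with NumPy/SciPy on random placeholder features; it is not the authors' pipeline, and real use would feed features from an Inception-style embedding network.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_synth):
    """FID-style distance between two (n_samples, n_features) feature matrices."""
    mu1, mu2 = feats_real.mean(axis=0), feats_synth.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_synth, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):      # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 16))             # placeholder "real image" features
synth = rng.normal(0.3, 1.0, size=(200, 16))  # placeholder "synthetic image" features
print(frechet_distance(real, synth))
```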
Affiliation(s)
- M Mendez: Institute of Medical Science, University of Toronto, Toronto, ON, Canada; Department of Medical Imaging, University of Toronto, Toronto, ON M5T 1W7, Canada
- F Castillo: Department of Medical Imaging, University of Toronto, Toronto, ON M5T 1W7, Canada
- L Probyn: Department of Medical Imaging, University of Toronto, Toronto, ON M5T 1W7, Canada
- S Kras: Mohawk College & McMaster Medical Radiation Sciences Program, McMaster University Mohawk College, Hamilton, ON, Canada
- P N Tyrrell: Institute of Medical Science, University of Toronto, Toronto, ON, Canada; Department of Medical Imaging, University of Toronto, Toronto, ON M5T 1W7, Canada; Department of Statistical Sciences, University of Toronto, Toronto, ON, Canada
6. Zhang P, Zhang M, Lu M, Jin C, Wang G, Lin X. Comparative Analysis of the Diagnostic Value of S-Detect Technology in Different Planes Versus the BI-RADS Classification for Breast Lesions. Acad Radiol 2025; 32:58-66. [PMID: 39138111] [DOI: 10.1016/j.acra.2024.08.005]
Abstract
RATIONALE AND OBJECTIVES S-Detect, a deep learning-based computer-aided detection system, is recognized as an important tool for diagnosing breast lesions using ultrasound imaging. However, it may exhibit inconsistent findings across multiple imaging planes. This study aims to evaluate the diagnostic performance of S-Detect in different planes and identify factors contributing to these inconsistencies. MATERIALS AND METHODS A retrospective cohort study was conducted on 711 patients with 756 breast lesions between January 2019 and January 2022. S-Detect was utilized to assess lesions in radial and anti-radial planes. BI-RADS classifications were employed for comparative analysis. The diagnostic performance was compared within each group, and p-values were computed for intergroup comparisons. Univariable and multivariable analyses were conducted to identify factors contributing to diagnostic inconsistency in S-Detect across planes. RESULTS Among 756 breast lesions, 668 (88.4%) exhibited consistent S-Detect outcomes across planes while 88 (11.6%) were inconsistent. In the consistent group, the diagnostic accuracy and area under the curve (AUC) of S-Detect were significantly higher than those of BI-RADS (accuracy: 91.2% vs. 84.9%, p = 0.045; AUC: 0.916 vs. 0.859, p = 0.036). In the inconsistent group, the diagnostic accuracy and AUC of S-Detect in the radial and anti-radial planes were lower than those of BI-RADS (accuracy: 47.7% for radial, 52.2% for anti-radial vs. 69.3% for BI-RADS, p = 0.014, p-anti = 0.039; AUC: 0.503 for radial, 0.497 for anti-radial vs. 0.739 for BI-RADS, p = 0.042, p-anti < 0.001). Diagnostic inconsistency in S-Detect across planes was significantly associated with lesion size, indistinct or angular margins, and posterior acoustic enhancement (p < 0.05). CONCLUSION S-Detect outperformed BI-RADS in diagnostic accuracy when its findings were concordant across planes. However, its diagnostic efficacy was compromised when the planes were discordant; in those circumstances, S-Detect results should be referenced with caution.
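The univariable/multivariable analysis described here is standard logistic regression on a binary "inconsistent across planes" outcome. A hedged sketch with statsmodels is shown below; the variable names, simulated data, and coefficients are invented for illustration and do not reproduce the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "size_mm": rng.normal(15, 6, n),             # lesion size
    "indistinct_margin": rng.integers(0, 2, n),  # 1 = indistinct/angular margin
    "posterior_enhance": rng.integers(0, 2, n),  # 1 = posterior acoustic enhancement
})
# Simulated outcome: 1 = S-Detect disagrees between radial and anti-radial planes.
logit_p = -3 + 0.08 * df["size_mm"] + 0.9 * df["indistinct_margin"] + 0.7 * df["posterior_enhance"]
df["inconsistent"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["size_mm", "indistinct_margin", "posterior_enhance"]])
model = sm.Logit(df["inconsistent"], X).fit(disp=0)
print(np.exp(model.params))   # odds ratios for each factor
print(model.pvalues)
```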
Affiliation(s)
- Panpan Zhang, Min Zhang, Menglin Lu, Chaoying Jin, Gang Wang, Xianfang Lin: Department of Ultrasound, The Affiliated Taizhou Hospital, Wenzhou Medical University, Linhai, Zhejiang Province, China
7. Jin ZY, Li JK, Niu RL, Fu NQ, Jiang Y, Li SY, Wang ZL. Application of an artificial intelligence-assisted diagnostic system for breast ultrasound: a prospective study. Gland Surg 2024; 13:2221-2231. [PMID: 39822361] [PMCID: PMC11733640] [DOI: 10.21037/gs-24-213]
Abstract
Background Accurate diagnosis of breast cancer is of great importance for improving patient prognosis. Artificial intelligence (AI)-assisted diagnostic systems for breast ultrasound are gradually being applied to the identification of benign and malignant breast lesions. This study aimed to evaluate the diagnostic performance and optimal application of AI-assisted ultrasonography for breast lesions in a clinical setting. Methods A total of 501 consecutive patients with 679 breast lesions were prospectively included in the study. Junior and senior radiologists were asked to interpret images of lesions with and without AI assistance, respectively. Three application modes of AI were employed: AI alone; adjusted BI-RADS (incorporating the Breast Imaging Reporting and Data System [BI-RADS] category obtained by AI into the category assigned by the radiologist); and a second-reading mode (combining characteristic information extracted by AI in a second reading to obtain a new BI-RADS category). The diagnostic performances of these application modes were analyzed and compared. Results The area under the curve (AUC) for junior radiologists increased from 0.879 to 0.921 with the second-reading mode, which was higher than that with adjusted BI-RADS (0.901), similar to that of AI alone (0.924), and lower than that obtained by senior radiologists (0.950). Using BI-RADS category 4A as the threshold, the sensitivity of junior radiologists increased from 0.83 to 0.92 (P<0.001). Furthermore, the specificity increased from 0.79 to 0.85, which was higher than that of AI alone and of adjusted BI-RADS (P<0.001). The unnecessary biopsy rate decreased by 14.70% (P=0.01). For senior radiologists, sensitivity increased from 0.91 to 0.96 (P=0.01). Similar results were observed in the subgroup analysis of lesions ≤2 cm. For lesions >2 cm, only the specificity of junior radiologists increased, from 0.39 to 0.52 (P=0.03). Conclusions AI-assisted ultrasound is useful for the diagnosis of breast lesions, particularly for junior radiologists and lesions ≤2 cm. The second-reading mode can achieve excellent diagnostic performance.
Affiliation(s)
- Zhi-Ying Jin, Jun-Kang Li, Rui-Lan Niu, Nai-Qin Fu, Ying Jiang, Shi-Yu Li, Zhi-Li Wang: Department of Ultrasound, The First Medical Center, Chinese PLA General Hospital, Beijing, China
8. Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024; 15:297. [PMID: 39666106] [PMCID: PMC11638451] [DOI: 10.1186/s13244-024-01869-4]
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in a real-world setting and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' and the various keywords associated with AI such as 'deep learning', 'machine learning,' and 'neural networks'. RESULTS From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The highest cited article titled 'Artificial Neural Networks In Mammography-Application To Decision-Making In The Diagnosis Of Breast-Cancer' was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics published were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology. It discusses the most impactful articles and explores the recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh: Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland; Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Nuala A Healy: Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland; Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland; Department of Radiology, University of Cambridge, Cambridge, United Kingdom
9. Wang P, Xia H, Liu L, Wang X, Yan L, Kong Z, Xu H, Huang B. Improving the Diagnostic Performance and Breast Imaging Reporting and Data System Category Agreement of Less Experienced Radiologists by Utilizing Computer-Aided Diagnosis Software for Breast Ultrasound. Ultrasound Q 2024; 40:e00695. [PMID: 39590515] [DOI: 10.1097/ruq.0000000000000695]
Abstract
This study aimed to assess the effectiveness of intelligence-based computer-aided diagnosis (CAD) software in ultrasound (US) and its potential to improve the diagnostic performance of less experienced radiologists, as well as their agreement on Breast Imaging Reporting and Data System (BI-RADS) categories with an experienced radiologist. Images of 385 breast lesions in 351 women acquired from January 2019 to December 2020 were included. Two less experienced radiologists independently reviewed the US images with and without CAD assistance, recording final assessments as BI-RADS categories. The diagnostic performances of the CAD software and the radiologists were calculated and compared. Kappa statistics were used to determine agreement between the experienced radiologist and the less experienced radiologists, based on the BI-RADS categories assigned before and after using the CAD software. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the CAD software were 95.5%, 71.5%, 81.3%, 69.8%, and 95.9%, respectively, and these metrics improved for both the junior and the intermediate-level radiologist with the addition of CAD. With CAD assistance, the area under the curve also improved for both readers (0.704 vs 0.847 and 0.876 vs 0.900; P = 0.009 and 0.005), although it remained lower than that of the senior radiologist. Agreement on BI-RADS category between the less experienced and the experienced radiologists improved significantly (P = 0.04 and P < 0.001). CAD on US can improve less experienced radiologists' diagnostic performance and agreement on BI-RADS categories, making it an effective decision-making tool in clinical practice.
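The agreement statistic used here is Cohen's kappa on BI-RADS categories. A minimal sketch follows, with invented category vectors rather than study data.

```python
from sklearn.metrics import cohen_kappa_score

senior = ["3", "4a", "4b", "5", "3", "4a", "2", "4c"]
junior_without_cad = ["4a", "4a", "4a", "5", "3", "3", "3", "4b"]
junior_with_cad = ["3", "4a", "4b", "5", "3", "4a", "2", "4b"]

print("kappa without CAD:", round(cohen_kappa_score(senior, junior_without_cad), 3))
print("kappa with CAD:", round(cohen_kappa_score(senior, junior_with_cad), 3))
```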
Affiliation(s)
- Peilei Wang: Department of Ultrasound, Zhongshan Hospital Fudan University, Shanghai, China
10. Li H, Zhao J, Jiang Z. Deep learning-based computer-aided detection of ultrasound in breast cancer diagnosis: A systematic review and meta-analysis. Clin Radiol 2024; 79:e1403-e1413. [PMID: 39217049] [DOI: 10.1016/j.crad.2024.08.002]
Abstract
PURPOSE The aim of this meta-analysis was to assess the diagnostic performance of deep learning (DL) with ultrasound in breast cancer diagnosis. Additionally, we categorized the included studies into two subgroups, a B-mode ultrasound subgroup and a multimodal ultrasound subgroup, and compared the performance of DL algorithms in breast cancer diagnosis using B-mode ultrasound alone versus multimodal ultrasound. METHODS We conducted a comprehensive search for relevant studies published from January 01, 2017 to July 31, 2023 in the MEDLINE and EMBASE databases. The quality of the included studies was evaluated using the QUADAS-2 tool and the radiomics quality score (RQS). Meta-analysis was performed using R software. Inter-study heterogeneity was assessed by I² values and Q-test P-values, with sources of heterogeneity analyzed through a random effects model based on test results. Summary receiver operating characteristic (SROC) curves were used for meta-analysis across multiple trials, while combined sensitivity, specificity, and AUC were calculated to quantify prediction accuracy. Subgroup analysis and sensitivity analyses were also conducted to identify potential sources of study heterogeneity. Publication bias was assessed using the funnel plot method. (PROSPERO identifier: CRD42024545758). RESULTS The 20 included studies comprised a total of 14,955 cases, with 4197 cases used for model testing. Among these test cases were 1582 breast cancer patients and 2615 benign or other breast lesions. The combined sensitivity, specificity, and AUC across all studies were 0.93, 0.90, and 0.732, respectively. In the subgroup analysis, the multimodal subgroup demonstrated superior performance, with combined sensitivity, specificity, and AUC of 0.93, 0.88, and 0.787, respectively, whereas the B-mode subgroup reached 0.92, 0.91, and 0.642, respectively. CONCLUSIONS The integration of DL with ultrasound demonstrates high accuracy in the adjunctive diagnosis of breast cancer, and the fusion of DL with multimodal breast ultrasound exhibits superior diagnostic efficacy compared with B-mode ultrasound alone.
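Two of the quantities named in the methods, a pooled proportion and the I² heterogeneity statistic derived from Cochran's Q, can be illustrated with a toy fixed-effect calculation; the per-study counts below are invented, and a real meta-analysis of this kind would normally use a bivariate random-effects model in R.

```python
import numpy as np

tp = np.array([80, 45, 120])   # per-study true positives (invented counts)
fn = np.array([6, 5, 10])      # per-study false negatives (invented counts)
sens = tp / (tp + fn)

# Inverse-variance (fixed-effect) pooling on the proportion scale, for illustration only.
var = sens * (1 - sens) / (tp + fn)
weights = 1 / var
pooled = np.sum(weights * sens) / np.sum(weights)

q = np.sum(weights * (sens - pooled) ** 2)   # Cochran's Q
df = len(sens) - 1
i2 = max(0.0, (q - df) / q) * 100            # I^2 as a percentage
print(f"pooled sensitivity = {pooled:.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")
```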
Affiliation(s)
- H Li: Department of Ultrasound, Changzheng Hospital, Naval Medical University (Second Medical University), No. 415 Fengyang Rd, Shanghai, China
- J Zhao: Department of Ultrasound, Changzheng Hospital, Naval Medical University (Second Medical University), No. 415 Fengyang Rd, Shanghai, China; Department of Ultrasound, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, No. 1279 Sanmen Rd, Shanghai, China
- Z Jiang: School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jungong Rd, Shanghai, China
11. Karthiga R, Narasimhan K, V T, M H, Amirtharajan R. Review of AI & XAI-based breast cancer diagnosis methods using various imaging modalities. Multimedia Tools and Applications 2024. [DOI: 10.1007/s11042-024-20271-2]
12. Mathur A, Arya N, Pasupa K, Saha S, Roy Dey S, Saha S. Breast cancer prognosis through the use of multi-modal classifiers: current state of the art and the way forward. Brief Funct Genomics 2024; 23:561-569. [PMID: 38688724] [DOI: 10.1093/bfgp/elae015]
Abstract
We present a survey of the current state of the art in breast cancer detection and prognosis. We analyze the evolution of artificial intelligence-based approaches from using uni-modal information alone to multi-modality for detection, and how this paradigm shift improves the efficacy of detection, consistent with clinical observations. We conclude that interpretable AI-based predictions and the ability to handle class imbalance should be considered priorities.
Affiliation(s)
- Archana Mathur: Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Yelahanka, 560064, Karnataka, India
- Nikhilanand Arya: School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneshwar, 751024, Odisha, India
- Kitsuchart Pasupa: School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, 1 Soi Chalongkrung 1, 10520, Bangkok, Thailand
- Sriparna Saha: Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801106, Bihar, India
- Sudeepa Roy Dey: Department of Computer Science and Engineering, PES University, Hosur Road, 560100, Karnataka, India
- Snehanshu Saha: CSIS and APPCAIR, BITS Pilani K.K Birla Goa Campus, Goa, 403726, Goa, India; Div of AI Research, HappyMonk AI, Bangalore, 560078, Karnataka, India
13. Tian R, Lu G, Zhao N, Qian W, Ma H, Yang W. Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images. Journal of Imaging Informatics in Medicine 2024; 37:1386-1400. [PMID: 38381383] [PMCID: PMC11300407] [DOI: 10.1007/s10278-024-01036-7]
Abstract
The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors from DBT-CCs and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors via DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
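The "stacking" late-fusion idea described above, separate learners on the deep-feature and radiomic-feature blocks combined by a meta-learner, can be sketched with scikit-learn as follows. The random feature matrix, the column split, and the SVM settings are placeholders for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))    # columns 0-19 stand in for deep features, 20-29 for radiomics
y = rng.integers(0, 2, size=120)  # 1 = malignant (toy labels)

def block_model(columns):
    """An SVM trained on one feature block only."""
    selector = ColumnTransformer([("block", "passthrough", columns)])
    return make_pipeline(selector, StandardScaler(), SVC(probability=True))

stack = StackingClassifier(
    estimators=[("deep", block_model(list(range(0, 20)))),
                ("radiomic", block_model(list(range(20, 30))))],
    final_estimator=SVC(probability=True),   # SVM meta-learner on stacked probabilities
    stack_method="predict_proba",
)
stack.fit(X, y)
print(stack.predict_proba(X[:3]))
```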
Affiliation(s)
- Ronghui Tian: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Guoxiu Lu: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China; Department of Nuclear Medicine, General Hospital of Northern Theatre Command, No. 83 Wenhua Road, Shenhe District, Shenyang, 110016, Liaoning Province, China
- Nannan Zhao: Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- He Ma: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Wei Yang: Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
14. Wang H, Zhu H, Ding L, Yang K. Attention pyramid pooling network for artificial diagnosis on pulmonary nodules. PLoS One 2024; 19:e0302641. [PMID: 38753596] [PMCID: PMC11098435] [DOI: 10.1371/journal.pone.0302641]
Abstract
The development of automated tools using advanced technologies like deep learning holds great promise for improving the accuracy of lung nodule classification in computed tomography (CT) imaging, ultimately reducing lung cancer mortality rates. However, lung nodules can be difficult to detect and classify from CT images, since different imaging modalities may provide varying levels of detail and clarity. Moreover, existing convolutional neural networks may struggle to detect nodules that are small or located in difficult-to-detect regions of the lung. Therefore, the attention pyramid pooling network (APPN) is proposed to identify and classify lung nodules. First, a strong feature extractor, VGG16, is used to obtain features from CT images. Then, the attention primary pyramid module is proposed by combining the attention mechanism and the pyramid pooling module, which allows features at different scales to be fused and focuses on the features most important for nodule classification. Finally, a gated spatial memory technique is used to decode the general features and extract more accurate features for classifying lung nodules. Experimental results on the LIDC-IDRI dataset show that the APPN is highly accurate and effective for classifying lung nodules, with a sensitivity of 87.59%, specificity of 90.46%, accuracy of 88.47%, positive predictive value of 95.41%, negative predictive value of 76.29%, and area under the receiver operating characteristic curve of 0.914.
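A rough PyTorch sketch of the kind of architecture the abstract describes, a VGG16 feature extractor, multi-scale pyramid pooling, and a simple channel-attention gate before a binary head, is given below. It is an assumption-laden approximation, not the APPN authors' code (the gated spatial memory decoder is omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16   # torchvision >= 0.13 for the weights= keyword

class AttentionPyramidPoolingNet(nn.Module):
    def __init__(self, in_ch=512, bins=(1, 2, 4), num_classes=2):
        super().__init__()
        self.backbone = vgg16(weights=None).features          # 512-channel feature maps
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, in_ch // len(bins), 1))
            for b in bins
        ])
        fused_ch = in_ch + (in_ch // len(bins)) * len(bins)
        self.attn = nn.Sequential(nn.Conv2d(fused_ch, fused_ch, 1), nn.Sigmoid())  # channel gate
        self.head = nn.Linear(fused_ch, num_classes)

    def forward(self, x):
        feat = self.backbone(x)
        h, w = feat.shape[2:]
        pooled = [F.interpolate(branch(feat), size=(h, w), mode="bilinear", align_corners=False)
                  for branch in self.branches]
        fused = torch.cat([feat] + pooled, dim=1)
        fused = fused * self.attn(F.adaptive_avg_pool2d(fused, 1))  # re-weight channels
        return self.head(F.adaptive_avg_pool2d(fused, 1).flatten(1))

logits = AttentionPyramidPoolingNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 2])
```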
Affiliation(s)
- Hongfeng Wang: School of Network Engineering, Zhoukou Normal University, Zhoukou, China
- Hai Zhu: School of Network Engineering, Zhoukou Normal University, Zhoukou, China
- Lihua Ding: College of Public Health, Zhengzhou University, Zhengzhou, China
- Kaili Yang: Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Henan University People's Hospital, Zhengzhou, China
15. Seth I, Lim B, Joseph K, Gracias D, Xie Y, Ross RJ, Rozen WM. Use of artificial intelligence in breast surgery: a narrative review. Gland Surg 2024; 13:395-411. [PMID: 38601286] [PMCID: PMC11002485] [DOI: 10.21037/gs-23-414]
Abstract
Background and Objective We have witnessed tremendous advances in artificial intelligence (AI) technologies. Breast surgery, a subspecialty of general surgery, has notably benefited from AI technologies. This review aims to evaluate how AI has been integrated into breast surgery practices, to assess its effectiveness in improving surgical outcomes and operational efficiency, and to identify potential areas for future research and application. Methods Two authors independently conducted a comprehensive search of PubMed, Google Scholar, EMBASE, and Cochrane CENTRAL databases from January 1, 1950, to September 4, 2023, employing keywords pertinent to AI in conjunction with breast surgery or cancer. The search focused on English language publications, where relevance was determined through meticulous screening of titles, abstracts, and full-texts, followed by an additional review of references within these articles. The review covered a range of studies illustrating the applications of AI in breast surgery encompassing lesion diagnosis to postoperative follow-up. Publications focusing specifically on breast reconstruction were excluded. Key Content and Findings AI models have preoperative, intraoperative, and postoperative applications in the field of breast surgery. Using breast imaging scans and patient data, AI models have been designed to predict the risk of breast cancer and determine the need for breast cancer surgery. In addition, using breast imaging scans and histopathological slides, models were used for detecting, classifying, segmenting, grading, and staging breast tumors. Preoperative applications included patient education and the display of expected aesthetic outcomes. Models were also designed to provide intraoperative assistance for precise tumor resection and margin status assessment. As well, AI was used to predict postoperative complications, survival, and cancer recurrence. Conclusions Extra research is required to move AI models from the experimental stage to actual implementation in healthcare. With the rapid evolution of AI, further applications are expected in the coming years including direct performance of breast surgery. Breast surgeons should be updated with the advances in AI applications in breast surgery to provide the best care for their patients.
Affiliation(s)
- Ishith Seth: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Bryan Lim: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Konrad Joseph: Department of Surgery, Port Macquarie Base Hospital, New South Wales, Australia
- Dylan Gracias: Department of Surgery, Townsville Hospital, Queensland, Australia
- Yi Xie: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Richard J. Ross: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Warren M. Rozen: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
16. He P, Chen W, Bai MY, Li J, Wang QQ, Fan LH, Zheng J, Liu CT, Zhang XR, Yuan XR, Song PJ, Cui LG. Application of computer-aided diagnosis to predict malignancy in BI-RADS 3 breast lesions. Heliyon 2024; 10:e24560. [PMID: 38304808] [PMCID: PMC10831749] [DOI: 10.1016/j.heliyon.2024.e24560]
Abstract
Purpose To evaluate the ability of computer-aided diagnosis (CAD) system (S-Detect) to identify malignancy in ultrasound (US) -detected BI-RADS 3 breast lesions. Materials and methods 148 patients with 148 breast lesions categorized as BI-RADS 3 were included in the study between January 2021 and September 2022. The malignancy rate, accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC) were calculated. Results In this study, 143 breast lesions were found to be benign, and 5 breast lesions were malignant (malignancy rate, 3.4 %, 95 % confidence interval (CI): 0.5-6.3). The malignancy rate rose significantly to 18.2 % (4/22, 95 % CI: 2.1-34.3) in the high-risk group with a "possibly malignant" CAD result (p = 0.017). With a "possibly benign" CAD result, the malignancy rate decreased to 0.8 % (1/126, 95 % CI: 0-2.2) in the low-risk group (p = 0.297). The AUC, sensitivity, specificity, accuracy, PPV, and NPV of the CAD system in BI-RADS 3 breast lesions were 0.837 (95 % CI: 77.7-89.6), 80.0 % (95 % CI: 73.6-86.4), 87.4 % (95 % CI: 82.0-92.7), 87.2 % (95 % CI: 81.8-92.6), 18.2 % (95 % CI: 2.1-34.3) and 99.2 % (95 % CI: 97.8-100.0), respectively. Conclusions CAD system (S-Detect) enables radiologists to distinguish a high-risk group and a low-risk group among US-detected BI-RADS 3 breast lesions, so that patients in the low-risk group can receive follow-up without anxiety, while those in the high-risk group with a significantly increased malignancy rate should actively receive biopsy to avoid delayed diagnosis of breast cancer.
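The group-wise malignancy rates and confidence intervals quoted above are simple proportion estimates; the sketch below reproduces that arithmetic with a normal-approximation (Wald) 95% CI. The counts mirror the abstract, but the CI method is an assumption for illustration rather than a statement of the authors' exact method.

```python
import math

def rate_with_wald_ci(events, n, z=1.96):
    """Proportion with a normal-approximation 95% CI, clipped to [0, 1]."""
    p = events / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

groups = [("all BI-RADS 3 lesions", 5, 148),
          ("CAD 'possibly malignant' (high risk)", 4, 22),
          ("CAD 'possibly benign' (low risk)", 1, 126)]
for label, events, n in groups:
    p, lo, hi = rate_with_wald_ci(events, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```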
Affiliation(s)
- Ping He: Department of Ultrasound, Peking University Third Hospital, 49 North Garden Rd., Beijing, 100191, China
- Wen Chen: Department of Ultrasound, Peking University Third Hospital, 49 North Garden Rd., Beijing, 100191, China
- Ming-Yu Bai: Department of Ultrasound, Peking University Third Hospital, 49 North Garden Rd., Beijing, 100191, China
- Jun Li: Department of Ultrasound, The First Affiliated Hospital of Medical College of Shihezi University, 107 North Second Rd., Shihezi, 832008, Xinjiang, China
- Qing-Qing Wang: Department of Breast Ultrasonography, Center for Diagnosis and Treatment of Breast Diseases, Yili Maternity and Child Health Hospital, Sichuan Road, Economic Cooperation Zone, Yili Kazakh Autonomous Prefecture, Xinjiang Uyghur Autonomous Region, China
- Li-Hong Fan: Department of Ultrasound, Jinzhong First People's Hospital, 689 South Huitong Rd., Yuci District 030600, Jinzhong City, Shanxi Province, China
- Jian Zheng: Ultrasound Department of the Second Affiliated Hospital, School of Medicine, The Chinese University of Hong Kong, Shenzhen & Longgang District People's Hospital of Shenzhen, Shenzhen, 518172, China
- Chun-Tao Liu: Department of Ultrasound, Liaocheng Dongchangfu District Maternal and Child Care Service Center, 129 Zhenxing West Rd., Liaocheng, 252000, Shandong, China
- Xiao-Rong Zhang: Department of Ultrasound, Beijing HaiDian Hospital, 29 Zhongguanchun Rd., Beijing, 100080, China
- Xi-Rong Yuan: Department of Ultrasound, The Second People's Hospital of Zhangqiu District, Jinan, Shandong, Ji Nan Zhang Qiu, 250200, China
- Peng-Jie Song: Department of Ultrasound, Port Hospital of Hebei Port Group Co. LTD, 57 Dongshan Street, Haigang District, Qinhuangdao City, Hebei Province, China
- Li-Gang Cui: Department of Ultrasound, Peking University Third Hospital, 49 North Garden Rd., Beijing, 100191, China
17. Dan Q, Xu Z, Burrows H, Bissram J, Stringer JSA, Li Y. Diagnostic performance of deep learning in ultrasound diagnosis of breast cancer: a systematic review. NPJ Precis Oncol 2024; 8:21. [PMID: 38280946] [PMCID: PMC10821881] [DOI: 10.1038/s41698-024-00514-z]
Abstract
Deep learning (DL) has been widely investigated in breast ultrasound (US) for distinguishing between benign and malignant breast masses. This systematic review of diagnostic test accuracy aimed to examine the accuracy of DL, compared with human readers, for the diagnosis of breast cancer on US in clinical settings. Our literature search included records from the PubMed, Embase, Scopus, and Cochrane Library databases. Test accuracy outcomes were synthesized to compare the diagnostic performance of DL and human readers, as well as to evaluate the assistive role of DL for human readers. A total of 16 studies involving 9238 female participants were included. There were no prospective studies comparing the test accuracy of DL versus human readers in clinical workflows. Diagnostic test results varied across the included studies. In 14 studies employing standalone DL systems, DL showed significantly lower sensitivities with comparable specificities in 5 studies and outperformed human readers at higher specificities in another 4 studies; in the remaining studies, DL models and human readers showed equivalent test outcomes. In 12 studies that assessed assistive DL systems, no study demonstrated an assistive role of DL in the overall diagnostic performance of human readers. Current evidence is insufficient to conclude that DL outperforms human readers or enhances the accuracy of diagnostic breast US in a clinical setting. Standardization of study methodologies is required to improve the reproducibility and generalizability of DL research, which will aid clinical translation and application.
Affiliation(s)
- Qing Dan: Department of Ultrasound, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China; Global Women's Health, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Ziting Xu: Department of Ultrasound, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China
- Hannah Burrows: Health Sciences Library, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Jennifer Bissram: Health Sciences Library, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Jeffrey S A Stringer: Global Women's Health, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Yingjia Li: Department of Ultrasound, Nanfang Hospital, Southern Medical University, 510515, Guangzhou, China
18. Lian W, Lian K, Lin T. Breast Imaging Reporting and Data System evaluation of breast lesions improved with virtual touch tissue imaging average grayscale values. Technol Health Care 2024; 32:925-936. [PMID: 37545278] [DOI: 10.3233/thc-230306]
Abstract
BACKGROUND Early breast cancer diagnosis is of great clinical importance for selecting treatment options, improving prognosis, and enhancing patients' quality of survival. OBJECTIVE We investigated the value of virtual touch tissue imaging average grayscale values (VAGV) as an adjunct to the Breast Imaging Reporting and Data System (BI-RADS) in diagnosing breast malignancy. METHODS We retrospectively analyzed 141 breast tumors in 134 patients. All breast lesions were diagnosed pathologically by biopsy or surgical excision. All patients first underwent conventional ultrasound (US) followed by virtual touch tissue imaging (VTI). The VAGV of each lesion was measured with ImageJ software. BI-RADS classification was performed for each lesion according to the US findings. We performed pairwise comparisons of the diagnostic value of VAGV, BI-RADS, and BI-RADS+VAGV. RESULTS VAGV was lower in malignant tumors than in benign ones (35.82 ± 13.39 versus 73.58 ± 42.69, P < 0.001). The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of VAGV were 0.834, 84.09%, and 69.07%, respectively. Among BI-RADS, VAGV, and BI-RADS+VAGV, BI-RADS+VAGV had the highest AUC (0.926 versus 0.882, P = 0.0066; 0.926 versus 0.834, P = 0.0012). There was perfect agreement between the two radiologists using VAGV (ICC = 0.9796) and substantial agreement using BI-RADS (Kappa = 0.725). CONCLUSION Our study shows that VAGV can accurately diagnose breast cancer. VAGV effectively improves the diagnostic performance of BI-RADS.
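Conceptually, the VAGV is just the mean pixel intensity inside the lesion region of the VTI image (the study measured it in ImageJ). A NumPy analogue on placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
vti_image = rng.integers(0, 256, size=(300, 400)).astype(float)  # stand-in VTI frame
roi_mask = np.zeros(vti_image.shape, dtype=bool)
roi_mask[120:180, 150:260] = True                                # stand-in lesion outline

vagv = vti_image[roi_mask].mean()
print(f"VAGV = {vagv:.1f}")   # lower values favoured malignancy in the study
```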
19. Kim DY, Oh HW, Suh CH. Reporting Quality of Research Studies on AI Applications in Medical Images According to the CLAIM Guidelines in a Radiology Journal With a Strong Prominence in Asia. Korean J Radiol 2023; 24:1179-1189. [PMID: 38016678] [PMCID: PMC10701000] [DOI: 10.3348/kjr.2023.1027]
Abstract
OBJECTIVE We aimed to evaluate the reporting quality of research articles that applied deep learning to medical imaging. Using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines and a journal with prominence in Asia as a sample, we intended to provide insight into reporting quality in the Asian region and establish a journal-specific audit. MATERIALS AND METHODS A total of 38 articles published in the Korean Journal of Radiology between June 2018 and January 2023 were analyzed. The analysis included calculating the percentage of studies that adhered to each CLAIM item and identifying items that were met by ≤ 50% of the studies. The article review was initially conducted independently by two reviewers, and the consensus results were used for the final analysis. We also compared adherence rates to CLAIM before and after December 2020. RESULTS Of the 42 items in the CLAIM guidelines, 12 items (29%) were satisfied by ≤ 50% of the included articles. None of the studies reported handling of missing data (item #13). Only one study each reported the use of de-identification methods (#12), the intended sample size (#19), robustness or sensitivity analysis (#30), and the full study protocol (#41). Of the studies, 35% reported the selection of data subsets (#10), 40% reported registration information (#40), and 50% measured interrater and intrarater variability (#18). No significant changes were observed in the rates of adherence to these 12 items before and after December 2020. CONCLUSION The reporting quality of artificial intelligence studies according to the CLAIM guidelines, in our study sample, showed room for improvement. We recommend that authors and reviewers have a solid understanding of the relevant reporting guidelines and ensure that the essential elements are adequately reported when writing and reviewing manuscripts for publication.
Collapse
Affiliation(s)
- Dong Yeong Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | | | - Chong Hyun Suh
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| |
Collapse
|
20
|
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385 DOI: 10.1088/1361-6560/acfade] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Accepted: 09/18/2023] [Indexed: 09/20/2023]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Collapse
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
| | - Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
| | - Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
| | - Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
| | - Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
| | - Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
| | - Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
| |
Collapse
|
21
|
He P, Chen W, Bai MY, Li J, Wang QQ, Fan LH, Zheng J, Liu CT, Zhang XR, Yuan XR, Song PJ, Cui LG. Deep Learning-Based Computer-Aided Diagnosis for Breast Lesion Classification on Ultrasound: A Prospective Multicenter Study of Radiologists Without Breast Ultrasound Expertise. AJR Am J Roentgenol 2023; 221:450-459. [PMID: 37222275 DOI: 10.2214/ajr.23.29328] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND. Computer-aided diagnosis (CAD) systems for breast ultrasound interpretation have been primarily evaluated at tertiary and/or urban medical centers by radiologists with breast ultrasound expertise. OBJECTIVE. The purpose of this study was to evaluate the impact of deep learning-based CAD software on the diagnostic performance of radiologists without breast ultrasound expertise at secondary or rural hospitals in the differentiation of benign and malignant breast lesions measuring up to 2.0 cm on ultrasound. METHODS. This prospective study included patients scheduled to undergo biopsy or surgical resection at any of eight participating secondary or rural hospitals in China of a breast lesion classified as BI-RADS category 3-5 on prior breast ultrasound from November 2021 to September 2022. Patients underwent an additional investigational breast ultrasound, performed and interpreted by a radiologist without breast ultrasound expertise (hybrid body/breast radiologists who either lacked breast imaging subspecialty training or for whom breast ultrasound accounted for less than 10% of the ultrasound examinations they performed annually), who assigned a BI-RADS category. CAD results were used to upgrade reader-assigned BI-RADS category 3 lesions to category 4A and to downgrade reader-assigned BI-RADS category 4A lesions to category 3. Histologic results of biopsy or resection served as the reference standard. RESULTS. The study included 313 patients (mean age, 47.0 ± 14.0 years) with 313 breast lesions (102 malignant, 211 benign). Of BI-RADS category 3 lesions, 6.0% (6/100) were upgraded by CAD to category 4A, of which 16.7% (1/6) were malignant. Of category 4A lesions, 79.1% (87/110) were downgraded by CAD to category 3, of which 4.6% (4/87) were malignant. Diagnostic performance was significantly better after application of CAD, in comparison with before application of CAD, in terms of accuracy (86.6% vs 62.6%, p < .001), specificity (82.9% vs 46.0%, p < .001), and PPV (72.7% vs 46.5%, p < .001) but not significantly different in terms of sensitivity (94.1% vs 97.1%, p = .38) or NPV (96.7% vs 97.0%, p > .99). CONCLUSION. CAD significantly improved radiologists' diagnostic performance, showing particular potential to reduce the frequency of benign breast biopsies. CLINICAL IMPACT. The findings indicate the ability of CAD to improve patient care in settings with incomplete access to breast imaging expertise.
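The upgrade/downgrade logic described above can be sketched as a simple rule applied to reader-assigned categories. The data below are random placeholders, and the biopsy threshold (category 4A or higher) is the conventional assumption rather than a detail quoted from the paper.

```python
# Minimal sketch (hypothetical data): apply the described CAD rule -- upgrade reader
# BI-RADS 3 lesions that CAD calls suspicious to 4A, downgrade reader 4A lesions that
# CAD calls benign to 3 -- then compare specificity before and after adjustment.
import numpy as np

rng = np.random.default_rng(2)
n = 313
truth = rng.integers(0, 2, n)                         # 1 = malignant (placeholder)
reader = rng.choice(["3", "4A", "4B", "4C", "5"], n)  # reader BI-RADS categories
cad_malignant = rng.integers(0, 2, n).astype(bool)    # CAD output (placeholder)

adjusted = reader.copy()
adjusted[(reader == "3") & cad_malignant] = "4A"      # upgrade
adjusted[(reader == "4A") & ~cad_malignant] = "3"     # downgrade

def specificity(categories, truth):
    biopsy = np.isin(categories, ["4A", "4B", "4C", "5"])  # >= 4A treated as test-positive
    benign = truth == 0
    return np.mean(~biopsy[benign])

print(f"specificity before: {specificity(reader, truth):.3f}, "
      f"after: {specificity(adjusted, truth):.3f}")
```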
Collapse
Affiliation(s)
- Ping He
- Department of Ultrasound, Peking University Third Hospital, 49 N Garden Rd, Beijing 100191, China
| | - Wen Chen
- Department of Ultrasound, Peking University Third Hospital, 49 N Garden Rd, Beijing 100191, China
| | - Ming-Yu Bai
- Department of Ultrasound, Peking University Third Hospital, 49 N Garden Rd, Beijing 100191, China
| | - Jun Li
- Department of Ultrasound, The First Affiliated Hospital of Medical College of Shihezi University, Xinjiang, China
| | - Qing-Qing Wang
- Department of Breast Sonography, Center for Diagnosis and Treatment of Breast Diseases, Yili Maternity and Child Health Hospital, Xinjiang, China
| | - Li-Hong Fan
- Department of Ultrasound, Jinzhong First People's Hospital, Jinzhong City, China
| | - Jian Zheng
- Ultrasound Department of The Second Affiliated Hospital, School of Medicine, The Chinese University of Hong Kong, Shenzhen & Longgang District People's Hospital of Shenzhen, Shenzhen, China
| | - Chun-Tao Liu
- Department of Ultrasound, Liaocheng Dongchangfu District Maternal and Child Care Service Center, Shandong, China
| | - Xiao-Rong Zhang
- Department of Ultrasound, Beijing HaiDian Hospital, Beijing, China
| | - Xi-Rong Yuan
- Department of Ultrasound, The Second People's Hospital of Zhangqiu District, Jinan, China
| | - Peng-Jie Song
- Department of Ultrasound, Port Hospital of Hebei Port Group Co. LTD, Qinhuangdao City, China
| | - Li-Gang Cui
- Department of Ultrasound, Peking University Third Hospital, 49 N Garden Rd, Beijing 100191, China
| |
Collapse
|
22
|
Chen YY, Chuang CH, Hsieh SL, Lin TL, Hsu CJ. A Combination of FPU-Net and Feature Clustering Methods for Accurate Segmentation of Femoral Neck in Radiographic Diagnosis. Diagnostics (Basel) 2023; 13:2855. [PMID: 37685393 PMCID: PMC10486373 DOI: 10.3390/diagnostics13172855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 08/08/2023] [Accepted: 09/01/2023] [Indexed: 09/10/2023] Open
Abstract
In this study, we develop an innovative method that assists computer-aided diagnosis in determining the exact location of the femoral neck junction in plain radiographs. Our algorithm consists of two phases, i.e., coarse prediction and fine matching, which are implemented by a supervised deep learning method and unsupervised clustering, respectively. In coarse prediction, standard masks are first produced by a specialist and trained in our proposed feature propagation network (FPU-Net) with supervised learning on the femoral neck dataset. In fine matching, the standard masks are first classified into different categories using our proposed three parameters with unsupervised learning. The predicted mask from FPU-Net is matched with each category of standard masks by calculating the intersection over union (IoU), and the predicted mask is finally substituted by the standard mask with the largest IoU value. A total of 4320 femoral neck regions in anterior-posterior (AP) pelvis radiographs collected from the China Medical University Hospital database were used to test our method. Simulation results show that, on the one hand, compared with other segmentation methods, the proposed method achieves a larger IoU value and better suppression of noise outside the region of interest; on the other hand, the introduction of unsupervised learning for fine matching helps in the accurate localization and segmentation of femoral neck images. Accurate femoral neck segmentation can assist surgeons in diagnosis, reduce the misdiagnosis rate, and lessen their workload.
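The fine-matching step lends itself to a short sketch: compute IoU between the predicted mask and each standard mask, then substitute the best match. The masks below are random placeholders; the mask size and the number of standard categories are assumptions.

```python
# Minimal sketch (hypothetical masks): the fine-matching step described above --
# compute IoU between the network's predicted mask and each standard mask, then
# replace the prediction with the best-matching standard mask.
import numpy as np

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(3)
predicted = rng.random((128, 128)) > 0.5                   # placeholder binary masks
standard_masks = [rng.random((128, 128)) > 0.5 for _ in range(4)]

scores = [iou(predicted, m) for m in standard_masks]
best = int(np.argmax(scores))
final_mask = standard_masks[best]                          # substitute the prediction
print(f"best-matching standard mask: {best}, IoU = {scores[best]:.3f}")
```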
Collapse
Affiliation(s)
- Y. Y. Chen
- Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411030, Taiwan; (Y.Y.C.); (C.H.C.)
| | - C. H. Chuang
- Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411030, Taiwan; (Y.Y.C.); (C.H.C.)
| | - S. L. Hsieh
- Department of Orthopedic Surgery, China Medical University Hospital, Taichung 404327, Taiwan; (T.L.L.); (C.J.H.)
| | - T. L. Lin
- Department of Orthopedic Surgery, China Medical University Hospital, Taichung 404327, Taiwan; (T.L.L.); (C.J.H.)
| | - C. J. Hsu
- Department of Orthopedic Surgery, China Medical University Hospital, Taichung 404327, Taiwan; (T.L.L.); (C.J.H.)
| |
Collapse
|
23
|
Yao S, Zhang B, Fei X, Xiao M, Lu L, Liu D, Zhang S, Cui J. AI-Assisted Ultrasound for the Early Diagnosis of Antibody-Negative Autoimmune Thyroiditis. J Multidiscip Healthc 2023; 16:1801-1810. [PMID: 37404960 PMCID: PMC10315148 DOI: 10.2147/jmdh.s408117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 05/12/2023] [Indexed: 07/06/2023] Open
Abstract
The prevalence of antibody-negative (seronegative) chronic autoimmune thyroiditis (SN-CAT) is increasing. Early diagnosis of SN-CAT can effectively prevent its further development. Thyroid ultrasound can diagnose autoimmune thyroiditis and predict hypothyroidism. Primary hypothyroidism with a hypoechoic pattern on thyroid ultrasound and negative thyroid serum antibodies is the main basis for the diagnosis of SN-CAT. However, for early SN-CAT, only hypoechoic thyroid changes on ultrasound and serum antibody testing are currently available. This study explored how to achieve an accurate and early diagnosis of SN-CAT and prevent the development of SN-CAT combined with hypothyroidism. The diagnosis of a hypoechoic thyroid by artificial intelligence is expected to be a breakthrough in the accurate diagnosis of SN-CAT.
Collapse
Affiliation(s)
- Shengsheng Yao
- China Medical University - Department of Thyroid and Breast Surgery, Liaoning Provincial People’s Hospital, Shenyang, Liaoning Province, 110015, People’s Republic of China
| | - Bo Zhang
- Department of Science and Education, The 10th Division of Xinjiang Production and Construction Corps, Beitun General Hospital, Beitun City, Xinjiang Province, 831300, People’s Republic of China
| | - Xiang Fei
- Department of Thyroid and Breast Surgery, People’s Hospital of China Medical University (Liaoning Provincial People’s Hospital), Shenyang, Liaoning Province, 110015, People's Republic of China
| | - Mingming Xiao
- Department of Pathology, People’s Hospital of China Medical University (Liaoning Provincial People’s Hospital), Shenyang, Liaoning Province, 110015, People’s Republic of China
| | - Li Lu
- Department of Endocrinology, People’s Hospital of China Medical University (Liaoning Provincial People’s Hospital), Shenyang, Liaoning Province, 110015, People’s Republic of China
| | - Daming Liu
- Department of Ultrasound, People’s Hospital of China Medical University (Liaoning Provincial People’s Hospital), Shenyang, Liaoning Province, 110015, People’s Republic of China
| | - Siyuan Zhang
- Department of Thyroid and Breast Surgery, The 10th Division of Xinjiang Production and Construction Corps, Beitun General Hospital, Beitun City, Xinjiang Province, 831300, People’s Republic of China
| | - Jianchun Cui
- Department of Thyroid and Breast Surgery, People’s Hospital of China Medical University (Liaoning Provincial People’s Hospital), Shenyang, Liaoning Province, 110015, People's Republic of China
| |
Collapse
|
24
|
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748 PMCID: PMC10296633 DOI: 10.3390/cancers15123139] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/02/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
Breast cancer is the second-leading cause of cancer mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management of breast lesions. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring patient prognosis. This article reviews the recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recent original research analyzing which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared with other applications. We also found that fewer studies were performed for prognosis than for diagnosis. Finally, we discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Collapse
Affiliation(s)
- Humayra Afrin
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| | - Nicholas B. Larson
- Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| | - Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| | - Azra Alizad
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| |
Collapse
|
25
|
Ji H, Zhu Q, Ma T, Cheng Y, Zhou S, Ren W, Huang H, He W, Ran H, Ruan L, Guo Y, Tian J, Chen W, Chen L, Wang Z, Zhou Q, Niu L, Zhang W, Yang R, Chen Q, Zhang R, Wang H, Li L, Liu M, Nie F, Zhou A. Development and validation of a transformer-based CAD model for improving the consistency of BI-RADS category 3-5 nodule classification among radiologists: a multiple center study. Quant Imaging Med Surg 2023; 13:3671-3687. [PMID: 37284087 PMCID: PMC10240028 DOI: 10.21037/qims-22-1091] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 04/07/2023] [Indexed: 06/08/2023]
Abstract
Background Significant differences exist in the classification outcomes of radiologists using ultrasonography-based Breast Imaging Reporting and Data System (BI-RADS) categories 3-5 for breast nodules, due to a lack of clear and distinguishing image features. Consequently, this retrospective study investigated whether BI-RADS 3-5 classification consistency could be improved using a transformer-based computer-aided diagnosis (CAD) model. Methods Independently, 5 radiologists performed BI-RADS annotations on 21,332 breast ultrasonographic images collected from 3,978 female patients at 20 clinical centers in China. All images were divided into training, validation, testing, and sampling sets. The trained transformer-based CAD model was then used to classify test images, for which sensitivity (SEN), specificity (SPE), accuracy (ACC), area under the curve (AUC), and the calibration curve were evaluated. Variations in these metrics among the 5 radiologists were analyzed by referencing the BI-RADS classification results provided by CAD for the sampling test set, to determine whether classification consistency (the κ value), SEN, SPE, and ACC could be improved. Results After the training set (11,238 images) and validation set (2,996 images) were learned by the CAD model, its classification ACC on the test set (7,098 images) was 94.89% for category 3, 96.90% for category 4A, 95.49% for category 4B, 92.28% for category 4C, and 95.45% for category 5 nodules. Based on pathological results, the AUC of the CAD model was 0.924, and the calibration curve showed that the probabilities predicted by CAD were slightly higher than the actual probabilities. After referencing the BI-RADS classification results, adjustments were made to 1,583 nodules in the sampling test set, of which 905 were reclassified to a lower category and 678 to a higher category. As a result, the ACC (72.41-82.65%), SEN (32.73-56.98%), and SPE (82.46-89.26%) of each radiologist's classification were significantly improved on average, with the consistency (κ values) of almost all radiologists increasing to >0.6. Conclusions The radiologists' classification consistency was markedly improved, with almost all κ values increasing to more than 0.6, and diagnostic efficiency also improved on average by approximately 24 percentage points for SEN (from 32.73% to 56.98%) and 7 percentage points for SPE (from 82.46% to 89.26%). The transformer-based CAD model can help improve radiologists' diagnostic efficacy and their consistency with one another in the classification of BI-RADS 3-5 nodules.
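Inter-reader consistency of categorical BI-RADS assessments is typically quantified with Cohen's kappa. The sketch below uses simulated reader labels to show how referencing a shared CAD output tends to raise kappa; all labels are synthetic placeholders.

```python
# Minimal sketch (hypothetical annotations): quantifying inter-reader consistency of
# BI-RADS 3-5 categories with Cohen's kappa, before and after one reader references
# a shared CAD output, as in the study above.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
categories = ["3", "4A", "4B", "4C", "5"]
reader_a = rng.choice(categories, 500)
reader_b = rng.choice(categories, 500)                              # independent -> low agreement
reader_b_cad = np.where(rng.random(500) < 0.8, reader_a, reader_b)  # CAD-referenced -> higher agreement

print("kappa before CAD:", round(cohen_kappa_score(reader_a, reader_b), 3))
print("kappa after CAD:", round(cohen_kappa_score(reader_a, reader_b_cad), 3))
```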
Collapse
Affiliation(s)
- Hongtao Ji
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Qiang Zhu
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Teng Ma
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yun Cheng
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Shuai Zhou
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wei Ren
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Huilian Huang
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen He
- Department of Ultrasonography, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Haitao Ran
- Department of Ultrasound, The Second Affiliated Hospital, Chongqing Medical University, Chongqing, China
| | - Litao Ruan
- Department of Medical Ultrasound, The First Affiliated Hospital, Xi’an Jiaotong University, Xi’an, China
| | - Yanli Guo
- Department of Ultrasound, The Southwest Hospital, Army Medical University, Chongqing, China
| | - Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital, Harbin Medical University, Harbin, China
| | - Wu Chen
- Department of Ultrasound, The First Hospital, Shanxi Medical University, Taiyuan, China
| | - Luzeng Chen
- Department of Ultrasound, The First Hospital, Peking University, Beijing, China
| | - Zhiyuan Wang
- Department of Ultrasound, Diagnosis Center of Ultrasound, Hunan Province Cancer Hospital, Changsha, China
| | - Qi Zhou
- Department of Ultrasound, The Second Affiliated Hospital, Xi’an Jiaotong University, Xi’an, China
| | - Lijuan Niu
- Department of Ultrasound, Cancer Hospital, National Cancer Center, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Wei Zhang
- Department of Ultrasonography, The Third Affiliated Hospital, Guangxi Medical University, Nanning, China
| | - Ruimin Yang
- Department of Ultrasound, The Frist Affiliated Hospital of Hebei North University, Zhangjiakou, China
| | - Qin Chen
- Department of Ultrasound, Sichuan Provincial People’s Hospital, University of Electronic Science and Technology of China, Chengdu, China
| | - Ruifang Zhang
- Department of Ultrasound, The First Affiliated Hospital, Zhengzhou University, Zhengzhou, China
| | - Hui Wang
- Department of Ultrasound, China-Japan Union Hospital, Jilin University, Changchun, China
| | - Li Li
- Department of Ultrasound, Qilu Hospital of Shandong University, Qingdao, China
| | - Minghui Liu
- Department of Ultrasound Diagnosis, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Fang Nie
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, China
| | - Aiyun Zhou
- Department of Ultrasound, The First Affiliated Hospital, Nanchang University, Nanchang, China
| |
Collapse
|
26
|
Zhao G, Kong D, Xu X, Hu S, Li Z, Tian J. Deep learning-based classification of breast lesions using dynamic ultrasound video. Eur J Radiol 2023; 165:110885. [PMID: 37290361 DOI: 10.1016/j.ejrad.2023.110885] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Revised: 03/27/2023] [Accepted: 05/17/2023] [Indexed: 06/10/2023]
Abstract
PURPOSE We intended to develop a deep-learning-based classification model based on breast ultrasound dynamic video, then evaluate its diagnostic performance in comparison with the classic model based on ultrasound static image and that of different radiologists. METHOD We collected 1000 breast lesions from 888 patients from May 2020 to December 2021. Each lesion contained two static images and two dynamic videos. We divided these lesions randomly into training, validation, and test sets by the ratio of 7:2:1. Two deep learning (DL) models, namely DL-video and DL-image, were developed based on 3D Resnet-50 and 2D Resnet-50 using 2000 dynamic videos and 2000 static images, respectively. Lesions in the test set were evaluated to compare the diagnostic performance of two models and six radiologists with different seniority. RESULTS The area under the curve of the DL-video model was significantly higher than those of the DL-image model (0.969 vs. 0.925, P = 0.0172) and six radiologists (0.969 vs. 0.779-0.912, P < 0.05). All radiologists performed better when evaluating the dynamic videos compared to the static images. Furthermore, radiologists performed better with increased seniority both in reading images and videos. CONCLUSIONS The DL-video model can discern more detailed spatial and temporal information for accurate classification of breast lesions than the conventional DL-image model and radiologists, and its clinical application can further improve the diagnosis of breast cancer.
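One hedged way to compare the AUCs of a video-based and an image-based model on the same test lesions is a paired bootstrap over lesions, sketched below with simulated scores; the study itself may have used a different statistical test (for example, DeLong's method).

```python
# Minimal sketch (hypothetical scores): paired bootstrap comparison of the AUCs of a
# video-based and an image-based classifier evaluated on the same test lesions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 100
y = rng.integers(0, 2, n)                                           # 1 = malignant (placeholder)
score_video = np.clip(y * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)      # stronger simulated signal
score_image = np.clip(y * 0.4 + rng.normal(0.35, 0.25, n), 0, 1)    # weaker simulated signal

deltas = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                                     # resample lesions with replacement
    if len(np.unique(y[idx])) < 2:
        continue                                                    # skip degenerate resamples
    deltas.append(roc_auc_score(y[idx], score_video[idx]) -
                  roc_auc_score(y[idx], score_image[idx]))
lo, hi = np.percentile(deltas, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```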
Collapse
Affiliation(s)
- Guojia Zhao
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China; Department of Ultrasound, Lin Yi People's Hospital, Linyi, Shandong, China
| | | | - Xiangli Xu
- The Second Hospital of Harbin, Harbin, Heilongjiang, China
| | - Shunbo Hu
- Lin Yi University, Linyi, Shandong, China.
| | - Ziyao Li
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China.
| | - Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China.
| |
Collapse
|
27
|
Berg WA, López Aldrete AL, Jairaj A, Ledesma Parea JC, García CY, McClennan RC, Cen SY, Larsen LH, de Lara MTS, Love S. Toward AI-supported US Triage of Women with Palpable Breast Lumps in a Low-Resource Setting. Radiology 2023; 307:e223351. [PMID: 37129492 PMCID: PMC10323289 DOI: 10.1148/radiol.223351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 02/20/2023] [Accepted: 03/15/2023] [Indexed: 05/03/2023]
Abstract
Background Most low- and middle-income countries lack access to organized breast cancer screening, and women with lumps may wait months for diagnostic assessment. Purpose To demonstrate that artificial intelligence (AI) software applied to breast US images obtained with low-cost portable equipment and by minimally trained observers could accurately classify palpable breast masses for triage in a low-resource setting. Materials and Methods This prospective multicenter study evaluated participants with at least one palpable mass who were enrolled in a hospital in Jalisco, Mexico, from December 2017 through May 2021. Orthogonal US images of any findings at the site of the lump and adjacent tissue were first obtained with portable US, with and without calipers. Women were then imaged with standard-of-care (SOC) US, with Breast Imaging Reporting and Data System assessments by a radiologist. After exclusions, 758 masses in 300 women were analyzable by AI, with outputs of benign, probably benign, suspicious, and malignant. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were determined. Results The mean patient age ± SD was 50.0 years ± 12.5 (range, 18-92 years) and the mean largest lesion diameter was 13 mm ± 8 (range, 2-54 mm). Of 758 masses, 360 (47.5%) were palpable and 56 (7.4%) were malignant, including six ductal carcinoma in situ. AI correctly identified 47 or 48 of 49 women with cancer (96%-98%) using either portable US or SOC US images, with AUCs of 0.91 and 0.95, respectively. One circumscribed invasive ductal carcinoma, ipsilateral to a spiculated invasive ductal carcinoma, was classified as probably benign with SOC US. Of 251 women with benign masses imaged with SOC US, 168 (67%) were classified as benign or probably benign by AI, as were 96 of 251 (38%, P < .001) with portable US. AI performance with images obtained by a radiologist was significantly better than with images obtained by a minimally trained observer. Conclusion AI applied to portable US images of breast masses can accurately identify malignancies. Moderate specificity, which could triage 38%-67% of women with benign masses without tertiary referral, should further improve with AI and observer training with portable US. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Slanetz in this issue.
Collapse
Affiliation(s)
- Wendie A. Berg
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Ana-Lilia López Aldrete
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Ajit Jairaj
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Juan Carlos Ledesma Parea
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Claudia Yolanda García
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - R. Chad McClennan
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Steven Yong Cen
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Linda H. Larsen
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - M. Teresa Soler de Lara
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| | - Susan Love
- From the Department of Radiology, University of Pittsburgh School of
Medicine, Magee-Womens Hospital, 300 Halket St, Pittsburgh, PA 15213 (W.A.B.);
Departments of Gynecology (A.L.L.A., C.Y.G.) and Radiology (J.C.L.P.), Hospital
Valentín Gómez Farias, Zapopan, Mexico; Koios Medical, New York,
NY (A.J., R.C.M.); Department of Radiology, Keck School of Medicine of USC, Los
Angeles, Calif (S.Y.C., L.H.L.); and Dr Susan Love Research Foundation, West
Hollywood, Calif (M.T.S.d.L., S.L.)
| |
Collapse
|
28
|
Wang Y, Tang L, Chen P, Chen M. The Role of a Deep Learning-Based Computer-Aided Diagnosis System and Elastography in Reducing Unnecessary Breast Lesion Biopsies. Clin Breast Cancer 2023; 23:e112-e121. [PMID: 36653206 DOI: 10.1016/j.clbc.2022.12.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2022] [Revised: 11/27/2022] [Accepted: 12/20/2022] [Indexed: 12/24/2022]
Abstract
OBJECTIVES Ultrasound examination has inter-observer and intra-observer variability and a high false-positive rate. The aim of this study was to evaluate the value of combining a deep learning-based computer-aided diagnosis (CAD) system and ultrasound elastography with conventional ultrasound (US) in increasing specificity and reducing unnecessary breast lesion biopsies. MATERIALS AND METHODS Conventional US, the CAD system, and strain elastography (SE) were retrospectively performed on 216 breast lesions before biopsy or surgery. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and biopsy rate were compared between conventional US and the combination of conventional US, SE, and the CAD system. RESULTS Of 216 lesions, 54 were malignant and 162 were benign. The addition of the CAD system and SE to conventional US increased the AUC from 0.716 to 0.910 and specificity from 46.9% to 85.8% without a loss in sensitivity, and unnecessary biopsies would have been avoided for 89.2% (66 of 74) of benign Breast Imaging Reporting and Data System (BI-RADS) category 4A lesions. CONCLUSION The addition of the CAD system and SE to conventional US improved specificity and AUC without loss of sensitivity, and reduced unnecessary biopsies.
Collapse
Affiliation(s)
- Yuqun Wang
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai China
| | - Lei Tang
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai China
| | - Pingping Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai China
| | - Man Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai China.
| |
Collapse
|
29
|
Gu Y, Xu W, Liu Y, An X, Li J, Cong L, Zhu L, He X, Wang H, Jiang Y. The feasibility of a novel computer-aided classification system for the characterisation and diagnosis of breast masses on ultrasound: a single-centre preliminary test study. Clin Radiol 2023:S0009-9260(23)00130-7. [PMID: 37069025 DOI: 10.1016/j.crad.2023.03.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 03/07/2023] [Accepted: 03/15/2023] [Indexed: 04/19/2023]
Abstract
AIM To introduce a novel computer-aided classification (CAC) system and investigate the feasibility of characterising and diagnosing breast masses on ultrasound (US). MATERIALS AND METHODS A total of 246 breast masses were included. US features and the final assessment categories of the breast masses were analysed by a radiologist and by the CAC system according to the Breast Imaging Reporting and Data System (BI-RADS) lexicon. The CAC system generated BI-RADS assessments from fused multi-view and colour Doppler US images, either without (SmartBreast) or with (m-CAC system) the incorporation of clinical variables. The diagnostic performance and the agreement in US characteristics between the radiologist and the CAC system were compared. RESULTS The agreement between the radiologist and the CAC system was substantial for mass shape (κ = 0.673), orientation (κ = 0.682), margin (κ = 0.622), posterior features (κ = 0.629), calcifications in a mass (κ = 0.709), and vascularity (κ = 0.745), fair for echo pattern (κ = 0.379), and moderate for the BI-RADS assessment (κ = 0.575). With BI-RADS 4a as the cut-off value, the specificity (52.5% versus 25%, p<0.0001) and accuracy (73.98% versus 62.6%, p=0.0002) of the m-CAC system were improved without significant loss of sensitivity (94.44% versus 98.41%, p=0.1250) compared with SmartBreast. The m-CAC system showed similar specificity (52.5% versus 45.83%, p=0.2430) and accuracy (73.98% versus 73.58%, p=1.0000) to the radiologist, but lower sensitivity (94.44% versus 100%, p=0.0156). CONCLUSION The CAC system showed acceptable agreement with the radiologist for the characterisation of breast lesions. It has the potential to mimic the decision-making behaviour of radiologists in the classification of breast lesions.
Collapse
Affiliation(s)
- Y Gu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - W Xu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Y Liu
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Beijing, China
| | - X An
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Beijing, China
| | - J Li
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - L Cong
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Beijing, China
| | - L Zhu
- Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen, China
| | - X He
- Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen, China
| | - H Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| | - Y Jiang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| |
Collapse
|
30
|
Xue P, Si M, Qin D, Wei B, Seery S, Ye Z, Chen M, Wang S, Song C, Zhang B, Ding M, Zhang W, Bai A, Yan H, Dang L, Zhao Y, Rezhake R, Zhang S, Qiao Y, Qu Y, Jiang Y. Unassisted Clinicians Versus Deep Learning-Assisted Clinicians in Image-Based Cancer Diagnostics: Systematic Review With Meta-analysis. J Med Internet Res 2023; 25:e43832. [PMID: 36862499 PMCID: PMC10020907 DOI: 10.2196/43832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 01/19/2023] [Accepted: 02/13/2023] [Indexed: 02/16/2023] Open
Abstract
BACKGROUND A number of publications have demonstrated that deep learning (DL) algorithms matched or outperformed clinicians in image-based cancer diagnostics, but these algorithms are frequently considered as opponents rather than partners. Despite the clinicians-in-the-loop DL approach having great potential, no study has systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. OBJECTIVE We systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. METHODS PubMed, Embase, IEEEXplore, and the Cochrane Library were searched for studies published between January 1, 2012, and December 7, 2021. Any type of study design was permitted that focused on comparing unassisted clinicians and DL-assisted clinicians in cancer identification using medical imaging. Studies using medical waveform-data graphics material and those investigating image segmentation rather than classification were excluded. Studies providing binary diagnostic accuracy data and contingency tables were included for further meta-analysis. Two subgroups were defined and analyzed, including cancer type and imaging modality. RESULTS In total, 9796 studies were identified, of which 48 were deemed eligible for systematic review. Twenty-five of these studies made comparisons between unassisted clinicians and DL-assisted clinicians and provided sufficient data for statistical synthesis. We found a pooled sensitivity of 83% (95% CI 80%-86%) for unassisted clinicians and 88% (95% CI 86%-90%) for DL-assisted clinicians. Pooled specificity was 86% (95% CI 83%-88%) for unassisted clinicians and 88% (95% CI 85%-90%) for DL-assisted clinicians. The pooled sensitivity and specificity values for DL-assisted clinicians were higher than for unassisted clinicians, at ratios of 1.07 (95% CI 1.05-1.09) and 1.03 (95% CI 1.02-1.05), respectively. Similar diagnostic performance by DL-assisted clinicians was also observed across the predefined subgroups. CONCLUSIONS The diagnostic performance of DL-assisted clinicians appears better than unassisted clinicians in image-based cancer identification. However, caution should be exercised, because the evidence provided in the reviewed studies does not cover all the minutiae involved in real-world clinical practice. Combining qualitative insights from clinical practice with data-science approaches may improve DL-assisted practice, although further research is required. TRIAL REGISTRATION PROSPERO CRD42021281372; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=281372.
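As a rough illustration of pooling study-level sensitivity, the sketch below performs fixed-effect inverse-variance pooling on the logit scale with made-up counts. Meta-analyses of diagnostic accuracy such as the one above normally use bivariate random-effects models, so this shows only the basic pooling idea.

```python
# Minimal sketch (hypothetical per-study counts): fixed-effect pooling of sensitivity
# on the logit scale; real diagnostic meta-analyses typically use bivariate
# random-effects models, so this is an illustration of the principle only.
import numpy as np

# (true positives, false negatives) per study -- placeholder values
studies = [(80, 20), (45, 5), (120, 30), (60, 12)]

logits, weights = [], []
for tp, fn in studies:
    p = (tp + 0.5) / (tp + fn + 1.0)              # continuity-corrected proportion
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)     # approximate variance of the logit
    logits.append(np.log(p / (1 - p)))
    weights.append(1.0 / var)

pooled_logit = np.average(logits, weights=weights)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```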
Collapse
Affiliation(s)
- Peng Xue
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Mingyu Si
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Dongxu Qin
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Bingrui Wei
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
| | - Zichen Ye
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Mingyang Chen
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Sumeng Wang
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Cheng Song
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Bo Zhang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ming Ding
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Wenling Zhang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Anying Bai
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Huijiao Yan
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Le Dang
- Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yuqian Zhao
- Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science & Technology of China, Sichuan, China
| | - Remila Rezhake
- Affiliated Cancer Hospital, The 3rd Affiliated Teaching Hospital of Xinjiang Medical University, Xinjiang, China
| | - Shaokai Zhang
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Henan, China
| | - Youlin Qiao
- Center for Global Health, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yimin Qu
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yu Jiang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
Collapse
|
31
|
Xie L, Liu Z, Pei C, Liu X, Cui YY, He NA, Hu L. Convolutional neural network based on automatic segmentation of peritumoral shear-wave elastography images for predicting breast cancer. Front Oncol 2023; 13:1099650. [PMID: 36865812 PMCID: PMC9970986 DOI: 10.3389/fonc.2023.1099650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 01/31/2023] [Indexed: 02/16/2023] Open
Abstract
Objective Our aim was to develop dual-modal CNN models based on combining conventional ultrasound (US) images and shear-wave elastography (SWE) of the peritumoral region to improve prediction of breast cancer. Method We retrospectively collected US images and SWE data of 1271 ACR BI-RADS 4 breast lesions from 1116 female patients (mean age ± standard deviation, 45.40 ± 9.65 years). The lesions were divided into three subgroups based on the maximum diameter (MD): ≤15 mm; >15 mm and ≤25 mm; >25 mm. We recorded lesion stiffness (SWV1) and the 5-point average stiffness of the peritumoral tissue (SWV5). The CNN models were built based on the segmentation of different widths of peritumoral tissue (0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm) and the internal SWE image of the lesions. All single-parameter CNN models, dual-modal CNN models, and quantitative SWE parameters in the training cohort (971 lesions) and the validation cohort (300 lesions) were assessed by receiver operating characteristic (ROC) curve analysis. Results The US + 1.0 mm SWE model achieved the highest area under the ROC curve (AUC) in the subgroup of lesions with MD ≤15 mm in both the training (0.94) and the validation cohorts (0.91). In the subgroups with MD between 15 and 25 mm and above 25 mm, the US + 2.0 mm SWE model achieved the highest AUCs in both the training cohort (0.96 and 0.95, respectively) and the validation cohort (0.93 and 0.91, respectively). Conclusion The dual-modal CNN models based on the combination of US and peritumoral region SWE images allow accurate prediction of breast cancer.
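A dual-modal design of this kind is commonly implemented as a two-branch network whose features are fused before classification. The sketch below is an assumed architecture using ResNet-18 backbones and simple concatenation; the paper's actual backbones, input preprocessing, and fusion strategy may differ.

```python
# Minimal sketch (assumed architecture): a two-branch CNN that fuses a conventional
# US crop with a peritumoral SWE crop before a shared classifier. Backbones, channel
# counts, and the fusion method are illustrative choices, not the paper's design.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualModalNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.us_branch = resnet18(weights=None)
        self.swe_branch = resnet18(weights=None)
        feat = self.us_branch.fc.in_features          # 512 for ResNet-18
        self.us_branch.fc = nn.Identity()             # keep the 512-d feature vectors
        self.swe_branch.fc = nn.Identity()
        self.classifier = nn.Linear(feat * 2, num_classes)

    def forward(self, us_img, swe_img):
        fused = torch.cat([self.us_branch(us_img), self.swe_branch(swe_img)], dim=1)
        return self.classifier(fused)

model = DualModalNet()
us = torch.randn(4, 3, 224, 224)       # conventional US crops (placeholder batch)
swe = torch.randn(4, 3, 224, 224)      # SWE crops including a peritumoral margin
print(model(us, swe).shape)            # torch.Size([4, 2])
```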
Collapse
Affiliation(s)
- Li Xie
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Zhen Liu
- Department of Computing, Hebin Intelligent Robots Co., LTD., Hefei, China
| | - Chong Pei
- Department of Respiratory and Critical Care Medicine, The First People’s Hospital of Hefei City, The Third Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xiao Liu
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Ya-yun Cui
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Nian-an He
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China,*Correspondence: Nian-an He, ; Lei Hu,
| | - Lei Hu
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China,*Correspondence: Nian-an He, ; Lei Hu,
| |
Collapse
|
32
|
Towards precision medicine based on a continuous deep learning optimization and ensemble approach. NPJ Digit Med 2023; 6:18. [PMID: 36737644 PMCID: PMC9898519 DOI: 10.1038/s41746-023-00759-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 01/17/2023] [Indexed: 02/05/2023] Open
Abstract
We developed a continuous learning system (CLS) based on a deep learning optimization and ensemble approach and conducted a retrospective study simulating a prospective design, using ultrasound images of breast masses for precise diagnosis. We extracted 629 breast masses and 2235 images from 561 cases at our institution to train the model in six stages to diagnose benign and malignant tumors, pathological types, and diseases. We randomly selected 180 out of 3098 cases from two external institutions. The CLS was tested with seven independent datasets and compared with 21 physicians; by training stage six, the system's diagnostic ability exceeded that of 20 of the physicians. The optimal integrated method we developed is expected to accurately diagnose breast masses. This method can also be extended to the intelligent diagnosis of masses in other organs. Overall, our findings have potential value in further promoting the application of AI diagnosis in precision medicine.
Collapse
|
33
|
Alsharif WM. The utilization of artificial intelligence applications to improve breast cancer detection and prognosis. Saudi Med J 2023; 44:119-127. [PMID: 36773967 PMCID: PMC9987701 DOI: 10.15537/smj.2023.44.2.20220611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Abstract
Breast imaging faces challenges from the current increase in medical imaging requests and from lesions that breast screening programs can miss. Solutions to these challenges are being sought through the recent advancement and adoption of artificial intelligence (AI)-based applications to enhance workflow efficiency as well as patient and healthcare outcomes. In most of the published studies, AI tools have been proposed and used to analyze different modes of breast imaging, mainly for the detection and classification of breast lesions, breast lesion segmentation, breast density evaluation, and breast cancer risk assessment. This article reviews the background of conventional computer-aided detection systems and AI, as well as AI-based applications in breast medical imaging for the identification, segmentation, and categorization of lesions, breast density evaluation, and cancer risk assessment. In addition, the challenges and limitations of AI-based applications in breast imaging are discussed.
Collapse
Affiliation(s)
- Walaa M. Alsharif
- From the Diagnostic Radiology Technology Department, College of Applied Medical Sciences, Taibah University, Al Madinah Al Munawwarah; and from the Society of Artificial Intelligence in Healthcare, Riyadh, Kingdom of Saudi Arabia.
| |
Collapse
|
34
|
Xing B, Chen X, Wang Y, Li S, Liang YK, Wang D. Evaluating breast ultrasound S-detect image analysis for small focal breast lesions. Front Oncol 2022; 12:1030624. [PMID: 36582786 PMCID: PMC9792476 DOI: 10.3389/fonc.2022.1030624] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Accepted: 11/21/2022] [Indexed: 12/15/2022] Open
Abstract
Background S-Detect is a computer-assisted, artificial intelligence-based image analysis system that has been integrated into the software of ultrasound (US) equipment and has the capacity to independently differentiate between benign and malignant focal breast lesions. Since the 2013 revision and upgrade of both the Breast Imaging Reporting and Data System (BI-RADS) US lexicon and the S-Detect software, evidence supporting improved accuracy and specificity of radiologists' assessment of breast lesions has accumulated. However, such assessment using S-Detect technology to distinguish malignant from benign breast lesions with a diameter no greater than 2 cm requires further investigation. Methods The US images of focal breast lesions from 295 patients in our hospital from January 2019 to June 2022 were collected. The BI-RADS assessments were evaluated by the embedded program and then manually modified prior to the determination of the pathological diagnosis. Receiver operating characteristic (ROC) curves were constructed to compare the diagnostic accuracy of the assessments based on the conventional US images, the S-Detect classification, and the combination of the two. Results There were 326 lesions identified in 295 patients, of which pathological confirmation demonstrated that 239 were benign and 87 were malignant. The sensitivity, specificity, and accuracy of the conventional imaging group were 75.86%, 93.31%, and 88.65%, respectively. The sensitivity, specificity, and accuracy of the S-Detect classification group were 87.36%, 88.28%, and 88.04%, respectively. The combination of S-Detect with US image analysis (Co-Detect group) improved the assessment, with a sensitivity, specificity, and accuracy of 90.80%, 94.56%, and 93.56%, respectively. The diagnostic accuracy of the conventional US group, the S-Detect group, and the Co-Detect group, as measured by the area under the curve, was 0.85, 0.88, and 0.93, respectively. The Co-Detect group had better diagnostic efficiency than the conventional US group (Z = 3.882, p = 0.0001) and the S-Detect group (Z = 3.861, p = 0.0001). There was no significant difference between conventional US and S-Detect alone in distinguishing benign from malignant small breast lesions. Conclusions The addition of S-Detect technology to conventional US imaging provided a novel and feasible method to differentiate benign from malignant small breast nodules.
Collapse
Affiliation(s)
- Boyuan Xing
- Department of Ultrasound Imaging, The People’s Hospital of China Three Gorges University/the First People’s Hospital of Yichang, Yichang, Hubei, China
| | - Xiangyi Chen
- Department of Nuclear Medicine, First Affiliated Hospital of Guangxi Medical University, Nanning, China
| | - Yalin Wang
- Department of Medical Engineering, Medical Supplies Center of PLA General Hospital, Beijing, China
| | - Shuang Li
- Department of Pathology, The People’s Hospital of China Three Gorges University/the First People’s Hospital of Yichang, Yichang, Hubei, China
| | - Ying-Kui Liang
- Department of Nuclear Medicine, The Sixth Medical Center of People's Liberation Army General Hospital, Beijing, China,*Correspondence: Dawei Wang, ; Ying-Kui Liang,
| | - Dawei Wang
- Department of Medical Engineering, Medical Supplies Center of PLA General Hospital, Beijing, China,Department of Nuclear Medicine, The Sixth Medical Center of People's Liberation Army General Hospital, Beijing, China,*Correspondence: Dawei Wang, ; Ying-Kui Liang,
| |
Collapse
|
35
|
Baek J, O’Connell AM, Parker KJ. Improving breast cancer diagnosis by incorporating raw ultrasound parameters into machine learning. MACHINE LEARNING: SCIENCE AND TECHNOLOGY 2022; 3:045013. [PMID: 36698865 PMCID: PMC9855672 DOI: 10.1088/2632-2153/ac9bcc] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/15/2022] [Accepted: 10/19/2022] [Indexed: 01/28/2023] Open
Abstract
Improving the diagnostic accuracy of ultrasound breast examinations remains an important goal. In this study, we propose a biophysical feature-based machine learning method for breast cancer detection to improve the performance beyond a benchmark deep learning algorithm and, furthermore, to provide a color overlay visual map of the probability of malignancy within a lesion. This overall framework is termed disease-specific imaging. Previously, 150 breast lesions were segmented and classified utilizing a modified fully convolutional network and a modified GoogLeNet, respectively. In this study, multiparametric analysis was performed within the contoured lesions. Features were extracted from ultrasound radiofrequency, envelope, and log-compressed data based on biophysical and morphological models. A support vector machine with a Gaussian kernel constructed a nonlinear hyperplane, and we calculated the distance between the hyperplane and each data point in multiparametric feature space. This distance can quantitatively assess a lesion and suggest a probability of malignancy that is color-coded and overlaid onto B-mode images. Training and evaluation were performed on in vivo patient data. The overall accuracy for the most common types and sizes of breast lesions in our study exceeded 98.0% for classification and 0.98 for the area under the receiver operating characteristic curve, which exceeds the performance of radiologists and a deep learning system. Further, the correlation between this probability and the Breast Imaging Reporting and Data System assessment provides a quantitative guideline for predicting breast cancer. Therefore, we anticipate that the proposed framework can help radiologists achieve more accurate and convenient breast cancer classification and detection.
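As an illustration of the kind of classifier described above, the sketch below trains an RBF-kernel support vector machine and converts the signed distance from its hyperplane into a 0-1 malignancy score; the synthetic features and the sigmoid mapping are assumptions, not the paper's biophysical features or calibration.

```python
# Sketch: RBF-kernel SVM whose distance to the hyperplane is mapped to a
# probability-like malignancy score (feature values are synthetic).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))                  # 150 lesions x 5 multiparametric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic benign(0)/malignant(1) labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

# Signed distance of each lesion's feature vector from the separating hyperplane.
distance = clf.decision_function(X)

# One simple way to squash the distance into a 0-1 "probability of malignancy"
# (the paper's calibration may differ).
prob_malignant = 1.0 / (1.0 + np.exp(-distance))
print(prob_malignant[:5].round(2))
```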
Collapse
Affiliation(s)
- Jihye Baek
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
| | - Avice M O’Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
| |
Collapse
|
36
|
Lee SE, Lee E, Kim EK, Yoon JH, Park VY, Youk JH, Kwak JY. Application of Artificial Intelligence Computer-Assisted Diagnosis Originally Developed for Thyroid Nodules to Breast Lesions on Ultrasound. J Digit Imaging 2022; 35:1699-1707. [PMID: 35902445 PMCID: PMC9712894 DOI: 10.1007/s10278-022-00680-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2021] [Revised: 06/27/2022] [Accepted: 07/11/2022] [Indexed: 10/16/2022] Open
Abstract
As thyroid and breast cancer have several US findings in common, we applied artificial intelligence computer-assisted diagnosis (AI-CAD) software originally developed for thyroid nodules to breast lesions on ultrasound (US) and evaluated its diagnostic performance. From January 2017 to December 2017, 1042 breast lesions (mean size 20.2 ± 11.8 mm) of 1001 patients (mean age 45.9 ± 12.9 years) who underwent US-guided core-needle biopsy were included. AI-CAD software that was previously trained and validated on thyroid nodules using a convolutional neural network was applied to breast nodules. There were 665 benign breast lesions (63.0%) and 391 breast cancers (37.0%). The area under the receiver operating characteristic curve (AUROC) of AI-CAD for differentiating breast lesions was 0.678 (95% confidence interval: 0.649, 0.707). After fine-tuning AI-CAD with 1084 separate breast lesions, the diagnostic performance of AI-CAD markedly improved (AUC 0.841). This was significantly higher than that of radiologists when the cutoff category was BI-RADS 4a (AUC 0.621, P < 0.001), but lower when the cutoff category was BI-RADS 4b (AUC 0.908, P < 0.001). When applied to breast lesions, the diagnostic performance of AI-CAD software that had been developed for differentiating malignant and benign thyroid nodules was reasonable. However, an organ-specific approach guarantees better diagnostic performance despite the similar US features of thyroid and breast malignancies.
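The fine-tuning step that improved the AUC amounts to re-training an existing classification CNN on breast images. The PyTorch sketch below shows transfer learning of this general kind; the ResNet-18 backbone, frozen layers, learning rate, and placeholder data are all assumptions for illustration and do not correspond to the software used in the study.

```python
# Illustrative transfer-learning sketch (PyTorch): re-train a CNN that was
# trained on one organ so it classifies lesions from another organ.
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for the original network; in practice the previously trained
# weights would be loaded here instead of a fresh model.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs malignant head

# Freeze early layers, fine-tune only the last block and the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a hypothetical mini-batch of breast US crops.
images = torch.randn(8, 3, 224, 224)            # placeholder image batch
labels = torch.randint(0, 2, (8,))              # placeholder benign/malignant labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```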
Collapse
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul, Korea
| | - Eunjung Lee
- Department of Computational Science and Engineering, Yonsei University, Seoul, Korea
| | - Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul, Korea
| | - Jung Hyun Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
| | - Vivian Youngjean Park
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea
| | - Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Jin Young Kwak
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Korea.
| |
Collapse
|
37
|
Marini TJ, Castaneda B, Parker K, Baran TM, Romero S, Iyer R, Zhao YT, Hah Z, Park MH, Brennan G, Kan J, Meng S, Dozier A, O’Connell A. No sonographer, no radiologist: Assessing accuracy of artificial intelligence on breast ultrasound volume sweep imaging scans. PLOS DIGITAL HEALTH 2022; 1:e0000148. [PMID: 36812553 PMCID: PMC9931251 DOI: 10.1371/journal.pdig.0000148] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 10/21/2022] [Indexed: 05/12/2023]
Abstract
Breast ultrasound provides a first-line evaluation for breast masses, but the majority of the world lacks access to any form of diagnostic imaging. In this pilot study, we assessed the combination of artificial intelligence (Samsung S-Detect for Breast) with volume sweep imaging (VSI) ultrasound scans to evaluate the possibility of inexpensive, fully automated breast ultrasound acquisition and preliminary interpretation without an experienced sonographer or radiologist. This study was conducted using examinations from a curated data set from a previously published clinical study of breast VSI. Examinations in this data set were obtained by medical students without prior ultrasound experience who performed VSI using a portable Butterfly iQ ultrasound probe. Standard of care ultrasound exams were performed concurrently by an experienced sonographer using a high-end ultrasound machine. Expert-selected VSI images and standard of care images were input into S-Detect, which output mass features and a classification of "possibly benign" or "possibly malignant." The S-Detect VSI report was then compared with 1) the standard of care ultrasound report by an expert radiologist, 2) the standard of care ultrasound S-Detect report, 3) the VSI report by an expert radiologist, and 4) the pathological diagnosis. There were 115 masses analyzed by S-Detect from the curated data set. The S-Detect interpretation of VSI showed substantial agreement across cancers, cysts, fibroadenomas, and lipomas with the expert standard of care ultrasound report (Cohen's κ = 0.73 (0.57-0.9 95% CI), p<0.0001), the standard of care ultrasound S-Detect interpretation (Cohen's κ = 0.79 (0.65-0.94 95% CI), p<0.0001), the expert VSI ultrasound report (Cohen's κ = 0.73 (0.57-0.9 95% CI), p<0.0001), and the pathological diagnosis (Cohen's κ = 0.80 (0.64-0.95 95% CI), p<0.0001). All pathologically proven cancers (n = 20) were designated as "possibly malignant" by S-Detect, with a sensitivity of 100% and specificity of 86%. Integration of artificial intelligence and VSI could allow both acquisition and interpretation of ultrasound images without a sonographer and radiologist. This approach holds potential for increasing access to ultrasound imaging and therefore improving outcomes related to breast cancer in low- and middle-income countries.
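The agreement figures above are Cohen's kappa values between paired categorical interpretations of the same masses; a minimal sketch of that calculation, using invented category labels rather than the study data, is shown below.

```python
# Sketch: Cohen's kappa between two paired categorical interpretations
# of the same masses (labels are invented for illustration).
from sklearn.metrics import cohen_kappa_score

s_detect_vsi  = ["cancer", "cyst", "fibroadenoma", "cyst", "cancer", "lipoma"]
expert_report = ["cancer", "cyst", "fibroadenoma", "fibroadenoma", "cancer", "lipoma"]

kappa = cohen_kappa_score(s_detect_vsi, expert_report)
print(f"Cohen's kappa = {kappa:.2f}")   # about 0.78 here; >0.6 is conventionally "substantial"
```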
Collapse
Affiliation(s)
- Thomas J. Marini
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
- * E-mail:
| | - Benjamin Castaneda
- Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
| | - Kevin Parker
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Timothy M. Baran
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Stefano Romero
- Departamento de Ingeniería, Pontificia Universidad Católica del Perú, Lima, Peru
| | - Radha Iyer
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Yu T. Zhao
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Zaegyoo Hah
- Samsung Medison Co., Ltd., Seoul, Republic of Korea
| | - Moon Ho Park
- Samsung Electronics Co., Ltd., Seoul, Republic of Korea
| | - Galen Brennan
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Jonah Kan
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Steven Meng
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Ann Dozier
- Department of Public Health, University of Rochester Medical Center, Rochester, New York, United States of America
| | - Avice O’Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, New York, United States of America
| |
Collapse
|
38
|
Wang R, Fu G, Li J, Pei Y. Diagnosis after zooming in: A multilabel classification model by imitating doctor reading habits to diagnose brain diseases. Med Phys 2022; 49:7054-7070. [PMID: 35880443 DOI: 10.1002/mp.15871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 03/18/2022] [Accepted: 06/28/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Computed tomography (CT) has the advantages of being low cost and noninvasive and is a primary diagnostic method for brain diseases. However, it is a challenge for junior radiologists to diagnose CT images accurately and comprehensively. It is necessary to build a system that can help doctors diagnose and provide an explanation of the predictions. Despite the success of deep learning algorithms in the field of medical image analysis, the task of brain disease classification still faces challenges: researchers have paid little attention to the burden of complex manual labeling and to the incompleteness of prediction explanations. More importantly, most studies only measure the performance of the algorithm and do not measure its effectiveness in doctors' actual diagnostic work. METHODS In this paper, we propose a model called DrCT2 that can detect brain diseases without using image-level labels and provide a more comprehensive explanation at both the slice and sequence levels. This model achieves reliable performance by imitating human expert reading habits: targeted scaling of primary images from the full slice scans and observation of suspicious lesions for diagnosis. We evaluated our model on two open-access data sets: CQ500 and the RSNA Intracranial Hemorrhage Detection Challenge. In addition, we defined three tasks to comprehensively evaluate model interpretability by measuring whether the algorithm can select key images with lesions. To verify the algorithm from the perspective of practical application, three junior radiologists were invited to participate in experiments comparing performance before and after human-computer cooperation in different aspects. RESULTS The method achieved F1-scores of 0.9370 on CQ500 and 0.8700 on the RSNA data set. The results show that our model has good interpretability under the premise of good performance. Evaluation experiments with human radiologists showed that our model can effectively improve diagnostic accuracy and efficiency. CONCLUSIONS We proposed a model that can simultaneously detect multiple brain diseases. The report generated by the model can assist doctors in avoiding missed diagnoses, and it has good clinical application value.
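For a multilabel classifier of this kind, per-disease predictions are usually scored against reference labels with an F1-score; the sketch below shows a micro-averaged multilabel F1 on an invented indicator matrix, not on the CQ500 or RSNA data.

```python
# Sketch: F1-score for a multilabel disease classifier
# (binary indicator matrix: rows = scans, columns = disease labels).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])

# Micro-averaging pools all per-label decisions before computing the score.
print(f"micro F1 = {f1_score(y_true, y_pred, average='micro'):.3f}")
```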
Collapse
Affiliation(s)
- Ruiqian Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Guanghui Fu
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Japan
| |
Collapse
|
39
|
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of the disease, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of incorrect decisions in cancer detection. Thus, new automatic methods are required to analyze all kinds of breast screening images and assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chance of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities, and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for these breast cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource to help researchers working in breast cancer imaging analysis.
Collapse
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| | - Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| | - Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| |
Collapse
|
40
|
Automated fracture screening using an object detection algorithm on whole-body trauma computed tomography. Sci Rep 2022; 12:16549. [PMID: 36192521 PMCID: PMC9529907 DOI: 10.1038/s41598-022-20996-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 09/21/2022] [Indexed: 11/28/2022] Open
Abstract
The emergency department is an environment with a potential risk for diagnostic errors during trauma care, particularly for fractures. Convolutional neural network (CNN) deep learning methods are now widely used in medicine because they improve diagnostic accuracy, decrease misinterpretation, and improve efficiency. In this study, we investigated whether automatic localization and classification using CNN could be applied to pelvic, rib, and spine fractures. We also examined whether this fracture detection algorithm could help physicians in fracture diagnosis. A total of 7664 whole-body CT axial slices (chest, abdomen, pelvis) from 200 patients were used. Sensitivity, precision, and F1-score were calculated to evaluate the performance of the CNN model. For the grouped mean values for pelvic, spine, or rib fractures, the sensitivity was 0.786, precision was 0.648, and F1-score was 0.711. Moreover, with CNN model assistance, surgeons showed improved sensitivity for detecting fractures and the time of reading and interpreting CT scans was reduced, especially for less experienced orthopedic surgeons. Application of the CNN model may lead to reductions in missed fractures from whole-body CT images and to faster workflows and improved patient care through efficient diagnosis in polytrauma patients.
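Since the F1-score is the harmonic mean of precision and sensitivity (recall), the grouped value quoted above can be checked directly from the two reported numbers:

```python
# Check that the reported F1-score is the harmonic mean of the grouped
# sensitivity (recall) and precision given in the abstract.
sensitivity = 0.786
precision = 0.648

f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(f"F1 = {f1:.3f}")   # 0.710, consistent with the reported 0.711 given input rounding
```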
Collapse
|
41
|
Kaplan E, Chan WY, Dogan S, Barua PD, Bulut HT, Tuncer T, Cizik M, Tan RS, Acharya UR. Automated BI-RADS classification of lesions using pyramid triple deep feature generator technique on breast ultrasound images. Med Eng Phys 2022; 108:103895. [DOI: 10.1016/j.medengphy.2022.103895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 09/09/2022] [Accepted: 09/13/2022] [Indexed: 10/14/2022]
|
42
|
Muacevic A, Adler JR. Mammographic and Ultrasonographic Imaging Analysis for Neoadjuvant Chemotherapy Evaluation: Volume Reduction Indexes That Correlate With Pathological Complete Response. Cureus 2022; 14:e29960. [PMID: 36225243 PMCID: PMC9534532 DOI: 10.7759/cureus.29960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/05/2022] [Indexed: 12/03/2022] Open
Abstract
INTRODUCTION We aimed to evaluate volume reduction in digital mammography (DM) and ultrasound (US) for neoadjuvant chemotherapy (NAC) evaluation, with associations with breast cancer-specific survival and pathological complete response (pCR). METHODS This is a retrospective observational cohort study analyzing recorded images from 122 subjects selected out of 569 patients who presented with advanced breast cancers. Spearman's correlation and generalized estimating equations (GEE) were used to compare volume reduction on DM and US between pCR and non-pCR groups, as determined by post-surgical anatomopathology after NAC. Cox regression and Kaplan-Meier curves analyzed associations between cancer-specific survival, pCR, and volume reductions. RESULTS A total of 34.4% (N=42) obtained pCR and 65.6% (N=80) did not. The minimum percentage reduction indexes needed to correlate with pCR over time were 28.9% for DM (p=0.006) and 10.36% for US (p=0.046), with high specificity (US=98%, DM=93%) but low sensitivity (US=7%, DM=18%). Positive predictive values were 82% (DM) and 86% (US), and negative predictive values were 37% (DM) and 36% (US). Cox regression and Kaplan-Meier curves demonstrated associations of breast cancer-specific survival with pCR (Cox regression coefficient [B]=0.209, 95% CI=0.048-0.914, p=0.038). CONCLUSIONS Volume reductions of at least 28.9% on DM and 10.36% on US correlated with pCR. Furthermore, pCR was associated with breast cancer-specific survival after NAC in volumetric morphological imaging analysis.
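As an illustration of relating a percentage volume-reduction index to pCR, the sketch below computes a Spearman rank correlation and applies a threshold-style rule similar to the cut-offs reported above; the values are invented, and the study's GEE and survival models are not reproduced.

```python
# Sketch: rank correlation between percentage volume reduction after NAC
# and pathological complete response (values are invented).
import numpy as np
from scipy.stats import spearmanr

volume_reduction_pct = np.array([45.0, 12.0, 30.5, 8.0, 55.2, 29.0, 10.4, 60.0])
pcr = np.array([1, 0, 1, 0, 1, 0, 0, 1])   # 1 = pathological complete response

rho, p_value = spearmanr(volume_reduction_pct, pcr)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# A threshold-style rule in the spirit of the abstract (e.g., >= 28.9% reduction on DM):
predicted_pcr = (volume_reduction_pct >= 28.9).astype(int)
print(predicted_pcr)
```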
Collapse
|
43
|
Xu Z, Wang Y, Chen M, Zhang Q. Multi-region radiomics for artificially intelligent diagnosis of breast cancer using multimodal ultrasound. Comput Biol Med 2022; 149:105920. [DOI: 10.1016/j.compbiomed.2022.105920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 07/06/2022] [Accepted: 07/30/2022] [Indexed: 11/03/2022]
|
44
|
Liu GS, Huang PY, Wen ML, Zhuang SS, Hua J, He XP. Application of endoscopic ultrasonography for detecting esophageal lesions based on convolutional neural network. World J Gastroenterol 2022; 28:2457-2467. [PMID: 35979257 PMCID: PMC9258283 DOI: 10.3748/wjg.v28.i22.2457] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/27/2021] [Accepted: 04/28/2022] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND A convolutional neural network (CNN) is a deep learning algorithm based on the principle of human brain visual cortex processing and image recognition.
AIM To automatically identify the invasion depth and origin of esophageal lesions based on a CNN.
METHODS A total of 1670 white-light images were used to train and validate the CNN system. The method proposed in this paper included two parts: (1) a location module, an object detection network that locates the main image feature regions for subsequent classification tasks; and (2) a classification module, a conventional classification CNN that classifies the regions cropped out by the object detection network.
RESULTS The CNN system proposed in this study achieved an overall accuracy of 82.49%, sensitivity of 80.23%, and specificity of 90.56%. After pathological follow-up, endoscopic and pathological diagnoses were compared in 726 patients. The misdiagnosis rate of endoscopic assessment of the extent of lesion invasion was approximately 9.5%; 41 patients showed no lesion invasion of the muscularis propria, but 36 of them pathologically showed invasion of the superficial muscularis propria. The patients with invasion of the tunica adventitia were all treated by surgery, with an accuracy rate of 100%. For the examination of submucosal lesions, the accuracy of endoscopic ultrasonography (EUS) was approximately 99.3%. The results of this study showed that EUS had a high accuracy rate for the origin of submucosal lesions, whereas the misdiagnosis rate was slightly high in the evaluation of the extent of lesion invasion. Misdiagnosis could be due to differences in endoscopists' operating and diagnostic skills, unclear ultrasound probe images, and indistinct lesions.
CONCLUSION This study is the first to recognize esophageal EUS images through deep learning; the system can automatically identify the invasion depth and origin of submucosal tumors and classify such tumors with good accuracy. In future studies, this method could provide guidance and assistance to clinical endoscopists.
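The two-part design described in the METHODS, an object-detection network that proposes lesion regions followed by a conventional CNN that classifies the cropped regions, can be sketched with standard torchvision components as below; the specific detector, classifier, class count, and score threshold are placeholders, not the networks used in the study.

```python
# Sketch of a two-stage pipeline: (1) detect candidate lesion regions,
# (2) classify each cropped region with a separate CNN (components and
# thresholds are placeholders, not the study's networks).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                   num_classes=2).eval()   # lesion vs background
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 4)     # e.g., 4 hypothetical lesion classes
classifier.eval()

image = torch.rand(3, 512, 512)                 # placeholder EUS frame

with torch.no_grad():
    detections = detector([image])[0]           # dict with 'boxes', 'scores', 'labels'
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:                         # keep confident proposals only
            continue
        x1, y1, x2, y2 = box.int().tolist()
        if x2 <= x1 or y2 <= y1:                # skip degenerate boxes
            continue
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = F.interpolate(crop, size=(224, 224))
        lesion_class = classifier(crop).argmax(dim=1).item()
        print(f"box={box.tolist()}, class={lesion_class}")
```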
Collapse
Affiliation(s)
- Gao-Shuang Liu
- Department of Gastroenterology, Nanjing BenQ Medical Center, The Affiliated BenQ Hospital of Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Pei-Yun Huang
- Department of Geriatric Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Min-Li Wen
- School of Computer Science and Engineering, Southeast University, Nanjing 211102, Jiangsu Province, China
| | - Shuai-Shuai Zhuang
- Department of Geriatric Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Jie Hua
- Department of Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Xiao-Pu He
- Department of Geriatric Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| |
Collapse
|
45
|
Wei Q, Yan YJ, Wu GG, Ye XR, Jiang F, Liu J, Wang G, Wang Y, Song J, Pan ZP, Hu JH, Jin CY, Wang X, Dietrich CF, Cui XW. The diagnostic performance of ultrasound computer-aided diagnosis system for distinguishing breast masses: a prospective multicenter study. Eur Radiol 2022; 32:4046-4055. [PMID: 35066633 DOI: 10.1007/s00330-021-08452-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 10/11/2021] [Accepted: 10/31/2021] [Indexed: 12/31/2022]
Abstract
OBJECTIVES To evaluate the diagnostic value of computer-aided diagnosis (CAD) software on ultrasound in distinguishing benign and malignant breast masses and avoiding unnecessary biopsy. METHODS This prospective, multicenter study included patients who were scheduled for pathological diagnosis of breast masses between April 2019 and November 2020. Ultrasound images, videos, CAD analysis, and BI-RADS assessments were obtained. The AUC, accuracy, sensitivity, specificity, PPV, and NPV were calculated and compared with those of radiologists. RESULTS Overall, 901 breast masses in 901 patients were enrolled in this study. The accuracy, sensitivity, specificity, PPV, and NPV of the CAD software were 89.6%, 94.2%, 87.0%, 80.4%, and 96.3%, respectively, in the long-axis section, and 89.0%, 91.4%, 87.7%, 80.8%, and 94.7%, respectively, in the short-axis section. With BI-RADS 4a as the cut-off value, the CAD software had a higher AUC (0.906 vs 0.734 vs 0.696, all p < 0.001) than both experienced and less experienced radiologists. With BI-RADS 4b as the cut-off value, the CAD software showed a better AUC than less experienced radiologists (0.906 vs 0.874, p < 0.001), but was not superior to experienced radiologists (0.906 vs 0.883, p = 0.057). After the application of the CAD software, the unnecessary biopsy rate of BI-RADS categories 4 and 5 was significantly decreased (33.0% vs 11.9%, 37.8% vs 14.5%), and the malignancy rate of biopsy in category 4a was significantly increased (11.6% vs 40.7%, 7.4% vs 34.9%, all p < 0.001). CONCLUSIONS CAD software on ultrasound can be used as an effective auxiliary diagnostic tool for differential diagnosis of benign and malignant breast masses and for reducing unnecessary biopsy. CLINICAL TRIAL REGISTRATION ClinicalTrials.gov (NCT03887598). KEY POINTS: • This prospective multicenter study showed that computer-aided diagnosis software provides greater diagnostic confidence for differentiating benign and malignant breast masses. • Computer-aided diagnosis software can help radiologists reduce unnecessary biopsy. • The management of patients with breast masses becomes more appropriate.
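Paired changes such as the reduction in unnecessary biopsies with versus without CAD in the same patients are commonly tested with McNemar's test on the discordant pairs; the sketch below uses a hypothetical 2x2 table and is not the study's analysis.

```python
# Sketch: McNemar's test on paired biopsy recommendations for the same
# benign masses, with and without CAD (counts are hypothetical).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                        CAD: biopsy   CAD: no biopsy
table = np.array([[      40,            55],    # radiologist alone: biopsy
                  [       3,            80]])   # radiologist alone: no biopsy

result = mcnemar(table, exact=True)   # exact binomial test on the 55 vs 3 discordant pairs
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
```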
Collapse
Affiliation(s)
- Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095, Jiefang Avenue, Wuhan, 430030, Hubei Province, China
| | - Yu-Jing Yan
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095, Jiefang Avenue, Wuhan, 430030, Hubei Province, China
| | - Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095, Jiefang Avenue, Wuhan, 430030, Hubei Province, China
| | - Xi-Rong Ye
- Department of Medical Ultrasound, The Central Hospital of EDong Healthcare, Huangshi, 435000, Hubei Province, China
| | - Fan Jiang
- Department of Medical Ultrasound, The Second Hospital of Anhui Medical University, Hefei, 230601, Anhui Province, China
| | - Jie Liu
- Department of Medical Ultrasound, Yichang General Hospital, Renmin Hospital of Three Gorges University, Hubei Province, Yichang, 443099, China
| | - Gang Wang
- Department of Medical Ultrasound, Taizhou Hospital of Zhejiang Province, Linhai, 318000, Zhejiang Province, China
| | - Yi Wang
- Department of Medical Ultrasound, Macheng People's Hospital, Macheng, 438300, Hubei Province, China
| | - Juan Song
- Department of Medical Ultrasound, Xiangyang No. 1 People's Hospital, Affiliated Hospital of Hubei University of Medicine, Xiangyang, 441000, Hubei Province, China
| | - Zhi-Ping Pan
- Department of Medical Ultrasound, Yixing Traditional Chinese Medicine Hospital, Yixing, 214200, Jiangsu Province, China
| | - Jin-Hua Hu
- Department of Medical Ultrasound, Anqing First People's Hospital of Anhui Medical University, Anqing, 246052, Anhui Province, China
| | - Chao-Ying Jin
- Department of Medical Ultrasound, Taizhou Hospital of Zhejiang Province, Linhai, 318000, Zhejiang Province, China
| | - Xiang Wang
- Department of Medical Ultrasound, Macheng People's Hospital, Macheng, 438300, Hubei Province, China
| | - Christoph F Dietrich
- Department of Internal Medicine, Hirslanden Clinic, Schänzlihalde 11, 3013, Bern, Switzerland
| | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, No. 1095, Jiefang Avenue, Wuhan, 430030, Hubei Province, China.
| |
Collapse
|
46
|
Wei Q, Zeng SE, Wang LP, Yan YJ, Wang T, Xu JW, Zhang MY, Lv WZ, Dietrich CF, Cui XW. The Added Value of a Computer-Aided Diagnosis System in Differential Diagnosis of Breast Lesions by Radiologists With Different Experience. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:1355-1363. [PMID: 34432320 DOI: 10.1002/jum.15816] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 07/20/2021] [Accepted: 07/28/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVES To evaluate the value of the computer-aided diagnosis system S-Detect (based on a deep learning algorithm) in distinguishing benign and malignant breast masses and reducing unnecessary biopsy, in relation to the experience of radiologists. METHODS From February 2018 to March 2019, 266 breast masses in 192 women were included in our study. Ultrasound (US) examination, including the S-Detect technique, was performed by a radiologist with about 10 years of clinical experience in breast US imaging. US images were analyzed by four other radiologists with different levels of experience in breast imaging (radiologists 1, 2, 3, and 4 with 1, 4, 9, and 20 years, respectively) on the basis of their clinical experience, both without and with the results of S-Detect. The diagnostic performance and unnecessary biopsies of the radiologists alone and of the radiologists combined with S-Detect were compared and analyzed. RESULTS After referring to the results of S-Detect, the changes made by less experienced radiologists were greater than those made by experienced radiologists (benign or malignant, 44 vs 22 vs 14 vs 2; unnecessary biopsy, 34 vs 25 vs 10 vs 5). When combined with S-Detect, less experienced radiologists showed significant improvement in accuracy, specificity, positive predictive value, negative predictive value, and area under the curve (P < .05), but experienced radiologists did not (P > .05). Similarly, the unnecessary biopsy rate of less experienced radiologists decreased significantly (44.4% vs 32.7%, P = .006; 36.8% vs 28.2%, P = .033), but that of experienced radiologists did not (P > .05). CONCLUSIONS Less experienced radiologists rely more on S-Detect software, and S-Detect can be an effective decision-making tool for breast US, especially for less experienced radiologists.
Collapse
Affiliation(s)
- Qi Wei
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Li-Ping Wang
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yu-Jing Yan
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Ting Wang
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Jian-Wei Xu
- Department of Medical Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Meng-Yi Zhang
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
| | - Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Hirslanden Beau Site, Salem und Permancence, Bern, Switzerland
| | - Xin-Wu Cui
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
47
|
Lee SE, Han K, Youk JH, Lee JE, Hwang JY, Rho M, Yoon J, Kim EK, Yoon JH. Differing benefits of artificial intelligence-based computer-aided diagnosis (AI-CAD) for breast US according to workflow and experience level. Ultrasonography 2022; 41:718-727. [PMID: 35850498 PMCID: PMC9532201 DOI: 10.14366/usg.22014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 03/30/2022] [Indexed: 11/10/2022] Open
Abstract
Purpose This study evaluated how artificial intelligence-based computer-assisted diagnosis (AI-CAD) for breast ultrasonography (US) influences diagnostic performance and agreement between radiologists with varying experience levels in different workflows. Methods Images of 492 breast lesions (200 malignant and 292 benign masses) in 472 women taken from April 2017 to June 2018 were included. Six radiologists (three inexperienced [<1 year of experience] and three experienced [10-15 years of experience]) individually reviewed US images with and without the aid of AI-CAD, first sequentially and then simultaneously. Diagnostic performance and interobserver agreement were calculated and compared between radiologists and AI-CAD. Results After implementing AI-CAD, the specificity, positive predictive value (PPV), and accuracy significantly improved, regardless of experience and workflow (all P<0.001). The overall area under the receiver operating characteristic curve significantly increased in simultaneous reading, but only for inexperienced radiologists. The agreement for Breast Imaging Reporting and Data System (BI-RADS) descriptors generally increased when AI-CAD was used (κ=0.29-0.63 to 0.35-0.73). Inexperienced radiologists tended to concede to AI-CAD results more easily than experienced radiologists, especially in simultaneous reading (P<0.001). The conversion rates for final assessment changes from BI-RADS 2 or 3 to BI-RADS higher than 4a or vice versa were also significantly higher in simultaneous reading than sequential reading (overall, 15.8% and 6.2%, respectively; P<0.001) for both inexperienced and experienced radiologists. Conclusion Using AI-CAD to interpret breast US improved the specificity, PPV, and accuracy of radiologists regardless of experience level. AI-CAD may work better in simultaneous reading to improve diagnostic performance and agreement between radiologists, especially for inexperienced radiologists.
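Agreement on ordinal BI-RADS categories, as summarized by the kappa ranges above, is often quantified with a weighted Cohen's kappa; the sketch below uses invented category assignments from two hypothetical readers.

```python
# Sketch: linearly weighted Cohen's kappa for ordinal BI-RADS categories
# assigned by two readers (assignments are invented).
from sklearn.metrics import cohen_kappa_score

# Encode categories as ordered integers, e.g., 2, 3, 4a->4, 4b->5, 4c->6, 5->7.
reader_a = [2, 3, 4, 4, 5, 3, 6, 7, 3, 4]
reader_b = [2, 3, 3, 4, 5, 4, 6, 7, 2, 4]

kappa = cohen_kappa_score(reader_a, reader_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```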
Collapse
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Department of Radiology, Research Institute of Radiological Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Jee Eun Lee
- Department of Radiology, Ewha Womans University College of Medicine, Seoul, Korea
| | - Ji-Young Hwang
- Department of Radiology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
| | - Miribi Rho
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Jiyoung Yoon
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Department of Radiology, Research Institute of Radiological Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Jung Hyun Yoon
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Correspondence to: Jung Hyun Yoon, MD, PhD, Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea Tel. +82-2-2228-7400 Fax. +82-2-2227-8337 E-mail:
| |
Collapse
|
48
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the possibility to implement deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar and even better performances of DL algorithms compared to radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine the exact added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are sparse, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Collapse
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
| |
Collapse
|
49
|
Zhu Y, Zhan W, Jia X, Liu J, Zhou J. Clinical Application of Computer-Aided Diagnosis for Breast Ultrasonography: Factors That Lead to Discordant Results in Radial and Antiradial Planes. Cancer Manag Res 2022; 14:751-760. [PMID: 35237075 PMCID: PMC8882474 DOI: 10.2147/cmar.s348463] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 01/27/2022] [Indexed: 01/30/2023] Open
Affiliation(s)
- Ying Zhu
- Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, Shanghai, People’s Republic of China
| | - Weiwei Zhan
- Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, Shanghai, People’s Republic of China
| | - Xiaohong Jia
- Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, Shanghai, People’s Republic of China
| | - Juan Liu
- Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, Shanghai, People’s Republic of China
| | - Jianqiao Zhou
- Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, Shanghai, People’s Republic of China
- Correspondence: Jianqiao Zhou, Department of Ultrasound, Shanghai Ruijin Hospital Affiliated to Medical School of Shanghai Jiaotong University, 197 Ruijin Er Road, Shanghai, 200025, People’s Republic of China, Email
| |
Collapse
|
50
|
Ha SM, Kim HK, Kim Y, Noh DY, Han W, Chang JM. Diagnostic performance improvement with combined use of proteomics biomarker assay and breast ultrasound. Breast Cancer Res Treat 2022; 192:541-552. [PMID: 35084623 DOI: 10.1007/s10549-022-06527-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 01/16/2022] [Indexed: 11/27/2022]
Abstract
PURPOSE To investigate the combined use of a blood-based 3-protein signature and breast ultrasound (US) for validating US-detected lesions. METHODS From July 2011 to April 2020, women who underwent whole-breast US within 6 months of the sampling period were retrospectively included. The blood-based 3-protein signature (Mastocheck®) value and US findings were evaluated. The following outcome measures were compared between US alone and the combination of the Mastocheck® value with US: sensitivity, specificity, positive predictive value (PPV), negative predictive value, area under the receiver operating characteristic curve (AUC), and biopsy rate. RESULTS Among the 237 women included, 59 (24.9%) were healthy individuals and 178 (75.1%) were cancer patients. The mean size of the cancers was 1.2 ± 0.8 cm. The median value of Mastocheck® was significantly different between nonmalignant (-0.24, interquartile range [IQR] -0.48, -0.03) and malignant lesions (0.55, IQR -0.03, 1.42) (P < .001). Utilizing the Mastocheck® value with US increased the AUC from 0.67 (95% confidence interval [CI] 0.61, 0.73) to 0.81 (95% CI 0.75, 0.88; P < .001) and the specificity from 35.6% (95% CI 23.4, 47.8) to 64.4% (95% CI 52.2, 76.6; P < .001), without loss in sensitivity. PPV increased from 82.2% (95% CI 77.1, 87.3) to 89.3% (95% CI 85.0, 93.6; P < .001), and the biopsy rate significantly decreased from 79.3% (188/237) to 72.1% (171/237) (P < .001). Consistent improvements in specificity, PPV, and AUC were observed in asymptomatic women, in women with dense breasts, and in those with normal/benign mammographic findings. CONCLUSION Mastocheck® is an effective tool that can be used with US to improve diagnostic specificity and reduce false-positive findings and unnecessary biopsies.
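Combining a continuous blood-based score with the US assessment, as evaluated here, can be illustrated by feeding both into a simple classifier and comparing AUCs; the logistic-regression sketch below uses invented values evaluated on the same data for brevity and is not the Mastocheck® algorithm.

```python
# Sketch: combining a blood-based protein-signature value with the US
# BI-RADS category in a logistic regression (all values are invented;
# a real analysis would evaluate on held-out data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

signature_value = np.array([-0.4, 0.8, -0.1, 1.5, -0.3, 0.6, -0.5, 1.1])
birads_category = np.array([ 3,   4,    4,   5,    3,   4,    3,   5  ])  # ordinal encoding
cancer          = np.array([ 0,   1,    0,   1,    0,   1,    0,   1  ])

X_us_only  = birads_category.reshape(-1, 1)
X_combined = np.column_stack([signature_value, birads_category])

model_us = LogisticRegression().fit(X_us_only, cancer)
model_combined = LogisticRegression().fit(X_combined, cancer)

auc_us = roc_auc_score(cancer, model_us.predict_proba(X_us_only)[:, 1])
auc_combined = roc_auc_score(cancer, model_combined.predict_proba(X_combined)[:, 1])
print(f"AUC US alone = {auc_us:.2f}, AUC US + signature = {auc_combined:.2f}")
```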
Collapse
Affiliation(s)
- Su Min Ha
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 110-744, Republic of Korea
| | - Hong-Kyu Kim
- Department of Surgery, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
| | - Yumi Kim
- Department of Surgery, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Department of Surgery, CHA University Gangnam Medical Center, Seoul, Republic of Korea
| | - Dong-Young Noh
- Department of Surgery, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Department of Surgery, CHA University Gangnam Medical Center, Seoul, Republic of Korea
| | - Wonshik Han
- Department of Surgery, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Cancer Research Institute, Seoul National University, Seoul, Korea
| | - Jung Min Chang
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 110-744, Republic of Korea.
| |
Collapse
|