1
Cho Y, Park EK, Chang Y, Kwon MR, Kim EY, Kim M, Park B, Lee S, Jeong HE, Kim KH, Kim TS, Lee H, Kwon R, Lim GY, Choi J, Kook SH, Ryu S. Concordant and discordant breast density patterns by different approaches for assessing breast density and breast cancer risk. Breast Cancer Res Treat 2025; 210:105-114. [PMID: 39482557 DOI: 10.1007/s10549-024-07541-1]
Abstract
PURPOSE To examine discrepancies in breast density assessments made by radiologists, LIBRA software, and an AI algorithm, and their association with breast cancer risk. METHODS Among 74,610 Korean women aged ≥ 34 years who underwent screening mammography, density estimates obtained from both LIBRA and the AI algorithm were compared with radiologists' assessments using BI-RADS density categories (A-D, with C and D designated as dense breasts). Breast cancer risk was compared according to concordant or discordant dense-breast classifications by radiologists, LIBRA, and AI. Cox proportional hazards models were used to determine adjusted hazard ratios (aHRs) with 95% confidence intervals (CIs). RESULTS During a median follow-up of 9.9 years, 479 breast cancer cases developed. Compared to the reference non-dense breast group, the aHRs (95% CIs) for breast cancer were 2.37 (1.68-3.36) for radiologist-classified dense breasts, 1.30 (1.05-1.62) for LIBRA, and 2.55 (1.84-3.56) for AI. For different combinations of breast density assessment, the aHRs (95% CIs) for breast cancer were 2.40 (1.69-3.41) for radiologist-dense/LIBRA-non-dense, 11.99 (1.64-87.62) for radiologist-non-dense/LIBRA-dense, and 2.99 (1.99-4.50) for breasts classified as dense by both, compared to concordant non-dense breasts. Similar trends were observed with the radiologist/AI classification: the aHRs (95% CIs) were 1.79 (1.02-3.12) for radiologist-dense/AI-non-dense, 2.43 (1.24-4.78) for radiologist-non-dense/AI-dense, and 3.23 (2.15-4.86) for breasts classified as dense by both. CONCLUSION The risk of breast cancer was highest in concordant dense breasts. Discordant dense breast cases also had a significantly higher risk of breast cancer than concordant non-dense cases, especially when breasts were identified as dense by either AI or LIBRA but not by radiologists.
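The survival comparison reported here can be sketched in a few lines of Python. Below is a minimal illustration with the lifelines library, assuming a hypothetical cohort table with follow-up time, an event indicator, and a combined radiologist/LIBRA agreement category; all column names are invented for the example, and the study's full covariate adjustment is not reproduced.

```python
# Minimal sketch of a Cox proportional hazards comparison of
# concordant/discordant density groups, using the lifelines library.
# Column names ("followup_years", "event", "density_group", ...) and the
# covariate set are hypothetical, not taken from the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical cohort file

# Four radiologist/LIBRA agreement groups; the concordant non-dense
# group is listed first so it becomes the reference after dummy coding.
df["density_group"] = pd.Categorical(
    df["density_group"],
    categories=["both_nondense", "radiologist_only_dense",
                "libra_only_dense", "both_dense"])
design = pd.get_dummies(
    df[["followup_years", "event", "age", "bmi", "density_group"]],
    drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(design, duration_col="followup_years", event_col="event")
cph.print_summary()  # exp(coef) column gives the adjusted hazard ratios
```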
Affiliation(s)
- Yoosun Cho
- Center for Cohort Studies, Total Healthcare Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Department of Family Medicine, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gwangmyeong, South Korea
- Eun Kyung Park
- Lunit, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Yoosoo Chang
- Center for Cohort Studies, Total Healthcare Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
- Department of Occupational and Environmental Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Samsung Main Building B2, 250, Taepyung-ro 2Ga, Jung-gu, Seoul, 04514, Republic of Korea.
- Department of Clinical Research Design & Evaluation, Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, Republic of Korea.
- Mi-Ri Kwon
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Eun Young Kim
- Department of Surgery, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Minjeong Kim
- Lunit, Seoul, Republic of Korea
- Department of Statistics, Ewha Womans University, Seoul, Republic of Korea
- Boyoung Park
- Department of Preventive Medicine, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hanyang Institute of Bioscience and Biotechnology, Hanyang University, Seoul, Republic of Korea
- Ria Kwon
- Center for Cohort Studies, Total Healthcare Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Institute of Medical Research, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea
- Ga-Young Lim
- Center for Cohort Studies, Total Healthcare Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Institute of Medical Research, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea
- JunHyeok Choi
- School of Mechanical Engineering, Sungkyunkwan University, Seoul, Republic of Korea
- Shin Ho Kook
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Seungho Ryu
- Center for Cohort Studies, Total Healthcare Center, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
- Department of Occupational and Environmental Medicine, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Samsung Main Building B2, 250, Taepyung-ro 2Ga, Jung-gu, Seoul, 04514, Republic of Korea.
- Department of Clinical Research Design & Evaluation, Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, Republic of Korea.
2
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911 DOI: 10.1109/rbme.2024.3357877]
Abstract
Since 2020, breast cancer has had the highest incidence rate of all malignancies worldwide. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
3
Heydarlou H, Hodson LJ, Dorraki M, Hickey TE, Tilley WD, Smith E, Ingman WV, Farajpour A. A Deep Learning Approach for the Classification of Fibroglandular Breast Density in Histology Images of Human Breast Tissue. Cancers (Basel) 2025; 17:449. [PMID: 39941816 PMCID: PMC11816254 DOI: 10.3390/cancers17030449]
Abstract
BACKGROUND To progress research into the biological mechanisms that link mammographic breast density to breast cancer risk, fibroglandular breast density can be used as a surrogate measure. This study aimed to develop a computational tool to classify fibroglandular breast density in hematoxylin and eosin (H&E)-stained breast tissue sections using deep learning approaches that would assist future mammographic density research. METHODS Four different architectural configurations of transfer-learned MobileNet-v2 convolutional neural networks (CNNs) and four different vision transformer (ViT) models were developed and trained on a database of H&E-stained normal human breast tissue sections (965 tissue blocks from 93 patients) that had been manually classified into one of five fibroglandular density classes, with class 1 being very low fibroglandular density and class 5 being very high fibroglandular density. RESULTS The MobileNet-Arc 1 and ViT model 1 achieved the highest overall F1 scores of 0.93 and 0.94, respectively. Both models exhibited the lowest false positive rate and highest true positive rate in class 5, while the most challenging classification was class 3, where images from classes 2 and 4 were mistakenly classified as class 3. The areas under the curve (AUCs) for all classes were higher than 0.98. CONCLUSIONS Both the ViT and MobileNet models showed promising performance in the accurate classification of H&E-stained tissue sections across all five fibroglandular density classes, providing a rapid and easy-to-use computational tool for breast density analysis.
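As a rough illustration of the transfer-learning setup this abstract describes, the sketch below adapts a pretrained MobileNet-v2 to five density classes in PyTorch. It is a generic stand-in, not the paper's exact "MobileNet-Arc 1" configuration.

```python
# Generic transfer-learning sketch: pretrained MobileNet-v2 adapted to
# five fibroglandular density classes. Not the paper's architecture.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # density classes 1 (very low) ... 5 (very high)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor

model.classifier[1] = nn.Linear(model.last_channel, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

x = torch.randn(8, 3, 224, 224)  # dummy batch standing in for H&E tiles
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()
```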
Affiliation(s)
- Hanieh Heydarlou
- Discipline of Surgical Specialties, Adelaide Medical School, University of Adelaide, The Queen Elizabeth Hospital, Woodville South, SA 5011, Australia; (H.H.); (L.J.H.); (W.V.I.)
- Robinson Research Institute, University of Adelaide, Adelaide, SA 5006, Australia
- Leigh J. Hodson
- Discipline of Surgical Specialties, Adelaide Medical School, University of Adelaide, The Queen Elizabeth Hospital, Woodville South, SA 5011, Australia; (H.H.); (L.J.H.); (W.V.I.)
- Robinson Research Institute, University of Adelaide, Adelaide, SA 5006, Australia
- Mohsen Dorraki
- School of Computer and Mathematical Sciences, The University of Adelaide, Adelaide, SA 5005, Australia;
- Australian Institute for Machine Learning (AIML), Adelaide, SA 5000, Australia
- Theresa E. Hickey
- Dame Roma Mitchell Cancer Research Laboratories, Adelaide Medical School, University of Adelaide, Adelaide, SA 5005, Australia; (T.E.H.); (W.D.T.)
- Wayne D. Tilley
- Dame Roma Mitchell Cancer Research Laboratories, Adelaide Medical School, University of Adelaide, Adelaide, SA 5005, Australia; (T.E.H.); (W.D.T.)
- Eric Smith
- Medical Oncology, Basil Hetzel Institute, The Queen Elizabeth Hospital, Woodville South, SA 5011, Australia;
- Wendy V. Ingman
- Discipline of Surgical Specialties, Adelaide Medical School, University of Adelaide, The Queen Elizabeth Hospital, Woodville South, SA 5011, Australia; (H.H.); (L.J.H.); (W.V.I.)
- Robinson Research Institute, University of Adelaide, Adelaide, SA 5006, Australia
- Ali Farajpour
- Discipline of Surgical Specialties, Adelaide Medical School, University of Adelaide, The Queen Elizabeth Hospital, Woodville South, SA 5011, Australia; (H.H.); (L.J.H.); (W.V.I.)
- Robinson Research Institute, University of Adelaide, Adelaide, SA 5006, Australia
4
Uwimana A, Gnecco G, Riccaboni M. Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review. Comput Biol Med 2025; 184:109391. [PMID: 39579663 DOI: 10.1016/j.compbiomed.2024.109391]
Abstract
BACKGROUND Recent healthcare advancements highlight the potential of Artificial Intelligence (AI) - and especially, among its subfields, Machine Learning (ML) - in enhancing Breast Cancer (BC) clinical care, leading to improved patient outcomes and increased radiologists' efficiency. While medical imaging techniques have significantly contributed to BC detection and diagnosis, their synergy with AI algorithms has consistently demonstrated superior diagnostic accuracy, reduced False Positives (FPs), and enabled personalized treatment strategies. Despite the burgeoning enthusiasm for leveraging AI for early and effective BC clinical care, its widespread integration into clinical practice is yet to be realized, and the evaluation of AI-based health technologies in terms of health and economic outcomes remains an ongoing endeavor. OBJECTIVES This scoping review aims to investigate AI (and especially ML) applications that have been implemented and evaluated across diverse clinical tasks or decisions in breast imaging and to explore the current state of evidence concerning the assessment of AI-based technologies for BC clinical care within the context of Health Technology Assessment (HTA). METHODS We conducted a systematic literature search following the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) checklist in PubMed and Scopus to identify relevant studies on AI (and particularly ML) applications in BC detection and diagnosis. We limited our search to studies published from January 2015 to October 2023. The Minimum Information about CLinical Artificial Intelligence Modeling (MI-CLAIM) checklist was used to assess the quality of AI algorithm development, evaluation, and reporting in the reviewed articles. The HTA Core Model® was also used to analyze the comprehensiveness, robustness, and reliability of the reported results and evidence in AI system evaluations, to ensure rigorous assessment of AI systems' utility and cost-effectiveness in clinical practice. RESULTS Of the 1652 initially identified articles, 104 were deemed eligible for inclusion in the review. Most studies examined the clinical effectiveness of AI-based systems (78.84%, n = 82), with one study focusing on safety in clinical settings, and 13.46% (n = 14) focusing on patients' benefits. Of the studies, 31.73% (n = 33) were ethically approved to be carried out in clinical practice, whereas 25% (n = 26) evaluated AI systems legally approved for clinical use. Notably, none of the studies addressed the organizational implications of AI systems in clinical practice. Of the 104 studies, only two focused on cost-effectiveness analysis, and these were analyzed separately. The average percentage scores for the first 102 AI-based studies' quality assessment based on the MI-CLAIM checklist criteria were 84.12%, 83.92%, 83.98%, 74.51%, and 14.7% for study design, data and optimization, model performance, model examination, and reproducibility, respectively. Notably, 20.59% (n = 21) of these studies relied on large-scale representative real-world breast screening datasets, with only 10.78% (n = 11) of studies demonstrating the robustness and generalizability of the evaluated AI systems. CONCLUSION In bridging the gap between cutting-edge developments and seamless integration of AI systems into clinical workflows, persistent challenges encompass data quality and availability, ethical and legal considerations, robustness and trustworthiness, scalability, and alignment with existing radiologists' workflow.
These hurdles impede the synthesis of comprehensive, robust, and reliable evidence to substantiate these systems' clinical utility, relevance, and cost-effectiveness in real-world clinical workflows. Consequently, evaluating AI-based health technologies through established HTA methodologies becomes complicated. We also highlight various factors that may significantly influence AI systems' effectiveness, such as operational dynamics, organizational structure, the application context of the AI system, and how AI support tools are used in breast screening or examination reading in radiology. Furthermore, we emphasize substantial reciprocal influences on decision-making processes between AI systems and radiologists. Thus, we advocate for an adapted assessment framework specifically designed to address these potential influences on AI systems' effectiveness, mainly addressing system-level transformative implications for AI systems rather than focusing solely on technical performance and task-level evaluations.
Affiliation(s)
- Massimo Riccaboni
- IMT School for Advanced Studies, Lucca, Italy; IUSS University School for Advanced Studies, Pavia, Italy.
5
Zhang H, Liang H, Wenjia G, Jing M, Gang S, Hongbing M. ACL-DUNet: A tumor segmentation method based on multiple attention and densely connected breast ultrasound images. PLoS One 2024; 19:e0307916. [PMID: 39485757 PMCID: PMC11530038 DOI: 10.1371/journal.pone.0307916]
Abstract
Breast cancer is the most common cancer in women. Breast masses are one of the distinctive signs used in diagnosing breast cancer, and ultrasound is widely used for screening as a non-invasive and effective method for breast examination. In this study, we used the Mendeley and BUSI datasets, comprising 250 images (100 benign, 150 malignant) and 780 images (133 normal, 487 benign, 210 malignant), respectively. The datasets were split into 80% for training and 20% for validation. The accurate measurement and characterization of breast tumors, including their area and shape, are crucial for guiding clinical decision-making and accurate diagnosis. In this study, a deep learning method for mass segmentation in breast ultrasound images is proposed, which uses a densely connected U-net with attention gates (AGs) as well as channel attention modules and scale attention modules for accurate breast tumor segmentation. The densely connected network is employed in the encoding stage to enhance the network's feature extraction capabilities. Three attention modules are integrated in the decoding stage to better capture the most relevant features. After validation on the Mendeley and BUSI datasets, the experimental results demonstrate that our method achieves a Dice Similarity Coefficient (DSC) of 0.8764 and 0.8313, respectively, outperforming other deep learning approaches. The source code is located at github.com/zhanghaoCV/plos-one.
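The Dice Similarity Coefficient used to report these results is straightforward to compute; here is a minimal PyTorch sketch for a binary mask.

```python
# Dice Similarity Coefficient (DSC) for a binary segmentation mask.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """pred: predicted probabilities in [0, 1]; target: binary mask."""
    pred_bin = (pred > 0.5).float()
    intersection = (pred_bin * target).sum()
    return (2.0 * intersection + eps) / (pred_bin.sum() + target.sum() + eps)

mask = torch.randint(0, 2, (1, 1, 256, 256)).float()
print(dice_coefficient(mask, mask))  # perfect overlap -> tensor(1.)
```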
Affiliation(s)
- Hao Zhang
- School of Computer Science and Technology, Xinjiang University, Urumqi, Xinjiang, China
- He Liang
- Department of Electronic Engineering, and Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Guo Wenjia
- Cancer Institute, Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Ma Jing
- School of Computer Science and Technology, Xinjiang University, Urumqi, Xinjiang, China
- Sun Gang
- Department of Breast and Thyroid Surgery, The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, Xinjiang, P.R. China
- Xinjiang Cancer Center/Key Laboratory of Oncology of Xinjiang Uyghur Autonomous Region, Urumqi, Xinjiang, P.R. China
- Ma Hongbing
- Department of Electronic Engineering, and Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
6
Faur IF, Dobrescu A, Clim IA, Pasca P, Prodan-Barbulescu C, Tarta C, Neamtu C, Isaic A, Brebu D, Braicu V, Feier CVI, Duta C, Totolici B. Sentinel Lymph Node Biopsy in Breast Cancer Using Different Types of Tracers According to Molecular Subtypes and Breast Density-A Randomized Clinical Study. Diagnostics (Basel) 2024; 14:2439. [PMID: 39518406 PMCID: PMC11545725 DOI: 10.3390/diagnostics14212439]
Abstract
Background: Sentinel lymph node biopsy (SLNB) has become an increasingly used method in early-stage, loco-regional breast cancer. Since the first reports on the technical feasibility of the sentinel node method in breast cancer, published by Krag (1993) and Giuliano (1994), the method has undergone numerous improvements and is now widely used worldwide. Methods: This prospective study took place at the "SJUPBT Surgery Clinic Timisoara" over a period of 1 year, between July 2023 and July 2024, during which 137 patients underwent sentinel lymph node biopsy (SLNB) based on the current guidelines. For the identification of sentinel lymph nodes, we used various methods, including single, dual, and triple tracers. Results: Breast density represents a predictive biomarker for the identification rate (IR) of a sentinel node, being directly correlated with BMI (above 30 kg/m2) and with an age above 50 years. The classification of patients according to breast density represents an important criterion given that adipose breast density (Tabar-Gram I-II) is associated with a lower IR of SLN compared with fibro-nodular density (Tabar-Gram III-V). We did not obtain any statistically significant data for the linear correlations between IR and the molecular profile, whether referring to the luminal subtypes (Luminal A and Luminal B) or to the non-luminal ones (HER2+ and TNBC), with p > 0.05, 0.201 [0.88, 0.167]; z = 1.82.
Affiliation(s)
- Ionut Flaviu Faur
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Multidisciplinary Doctoral School “Vasile Goldiș”, Western University of Arad, 310025 Arad, Romania
- Amadeus Dobrescu
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Ioana Adelina Clim
- II Obstetrics and Gynecology Clinic “Dominic Stanca”, 400124 Cluj-Napoca, Romania;
- Paul Pasca
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Catalin Prodan-Barbulescu
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- Department I, Discipline of Anatomy and Embriology, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Doctoral School, “Victor Babes” University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Cristi Tarta
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Carmen Neamtu
- Faculty of Dentistry, “Vasile Goldiș” Western University of Arad, 310025 Arad, Romania;
- I Clinic of General Surgery, Arad County Emergency Clinical Hospital, 310158 Arad, Romania;
- Alexandru Isaic
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Dan Brebu
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Vlad Braicu
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Catalin Vladut Ionut Feier
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- First Surgery Clinic, “Pius Brinzeu” Clinical Emergency Hospital, 300723 Timisoara, Romania
- Ciprian Duta
- II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; (A.D.); (P.P.); (C.P.-B.); (C.T.); (A.I.); (D.B.); (V.B.); (C.D.)
- X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania;
- Bogdan Totolici
- I Clinic of General Surgery, Arad County Emergency Clinical Hospital, 310158 Arad, Romania;
- Department of General Surgery, Faculty of Medicine, “Vasile Goldiș” Western University of Arad, 310025 Arad, Romania
7
Kaiser AV, Zanolin-Purin D, Chuck N, Enaux J, Wruk D. Assessment of the Breast Density Prevalence in Swiss Women with a Deep Convolutional Neural Network: A Cross-Sectional Study. Diagnostics (Basel) 2024; 14:2212. [PMID: 39410616 PMCID: PMC11476330 DOI: 10.3390/diagnostics14192212]
Abstract
Background/Objectives: High breast density is a risk factor for breast cancer and can reduce the sensitivity of mammography. Given the influence of breast density on patient risk stratification and screening accuracy, it is crucial to monitor the prevalence of extremely dense breasts within local populations. Moreover, there is a lack of comprehensive understanding regarding breast density prevalence in Switzerland. Therefore, this study aimed to determine the prevalence of breast density in a selected Swiss population. Methods: To overcome the potential variability in breast density classifications by human readers, this study utilized commercially available deep convolutional neural network breast density classification software. A retrospective analysis of mammographic images of women aged 40 years and older was performed. Results: A total of 4698 mammograms from women (mean age 58 ± 11 years) were included in this study. The highest prevalence of breast density was in category C (heterogeneously dense), which was observed in 41.5% of the cases. This was followed by category B (scattered areas of fibroglandular tissue), which accounted for 22.5%. Conclusions: Notably, extremely dense breasts (category D) were significantly more common in younger women, with a prevalence of 34%. However, this rate dropped sharply to less than 10% in women over 55 years of age.
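The age-stratified prevalence figures reported here amount to a row-normalised cross-tabulation. A sketch with pandas, assuming a hypothetical per-woman table of ages and software-assigned BI-RADS categories (both column names are invented for the example):

```python
# Row-normalised prevalence of BI-RADS density categories by age group,
# on a hypothetical table with one row per woman. Column names
# ("age", "birads_density") are illustrative only.
import pandas as pd

df = pd.read_csv("density_labels.csv")
df["age_group"] = pd.cut(df["age"], bins=[40, 55, 120],
                         labels=["40-54", "55+"], right=False)

prevalence = (pd.crosstab(df["age_group"], df["birads_density"],
                          normalize="index") * 100).round(1)
print(prevalence)  # % of categories A-D within each age group
```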
Affiliation(s)
- Adergicia V. Kaiser
- Faculty of Medical Sciences, Private University in the Principality of Liechtenstein (UFL), 9495 Triesen, Liechtenstein; (D.Z.-P.); (J.E.)
- Daniela Zanolin-Purin
- Faculty of Medical Sciences, Private University in the Principality of Liechtenstein (UFL), 9495 Triesen, Liechtenstein; (D.Z.-P.); (J.E.)
- Natalie Chuck
- St. Gallen Radiology Network, Cantonal Hospital of St. Gallen, 9007 St. Gallen, Switzerland (D.W.)
- St. Gallen Radiology Network, Grabs Hospital, 9472 Grabs, Switzerland
- Jennifer Enaux
- Faculty of Medical Sciences, Private University in the Principality of Liechtenstein (UFL), 9495 Triesen, Liechtenstein; (D.Z.-P.); (J.E.)
- Daniela Wruk
- St. Gallen Radiology Network, Cantonal Hospital of St. Gallen, 9007 St. Gallen, Switzerland (D.W.)
- St. Gallen Radiology Network, Grabs Hospital, 9472 Grabs, Switzerland
8
Mathur A, Arya N, Pasupa K, Saha S, Roy Dey S, Saha S. Breast cancer prognosis through the use of multi-modal classifiers: current state of the art and the way forward. Brief Funct Genomics 2024; 23:561-569. [PMID: 38688724 DOI: 10.1093/bfgp/elae015]
Abstract
We present a survey of the current state of the art in breast cancer detection and prognosis. We analyze the evolution of Artificial Intelligence-based approaches from using uni-modal information alone to multi-modality for detection, and how such a paradigm shift improves detection efficacy, consistent with clinical observations. We conclude that interpretable AI-based predictions and the ability to handle class imbalance should be considered priorities.
Affiliation(s)
- Archana Mathur
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Yelahanka, 560064, Karnataka, India
- Nikhilanand Arya
- School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneshwar, 751024, Odisha, India
- Kitsuchart Pasupa
- School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, 1 Soi Chalongkrung 1, 10520, Bangkok, Thailand
- Sriparna Saha
- Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801106, Bihar, India
- Sudeepa Roy Dey
- Department of Computer Science and Engineering, PES University, Hosur Road, 560100, Karnataka, India
- Snehanshu Saha
- CSIS and APPCAIR, BITS Pilani K.K Birla Goa Campus, Goa, 403726, Goa, India
- Div of AI Research, HappyMonk AI, Bangalore, 560078, Karnataka, India
9
Shao Z, Cai Y, Hao Y, Hu C, Yu Z, Shen Y, Gao F, Zhang F, Ma W, Zhou Q, Chen J, Lu H. AI-based strategies in breast mass ≤ 2 cm classification with mammography and tomosynthesis. Breast 2024; 78:103805. [PMID: 39321503 PMCID: PMC11462177 DOI: 10.1016/j.breast.2024.103805]
Abstract
PURPOSE To evaluate the diagnostic performance of digital mammography (DM), digital breast tomosynthesis (DBT), and DM combined with DBT using AI-based strategies for breast masses ≤ 2 cm. MATERIALS AND METHODS DM and DBT images of 512 breast masses in 483 patients were acquired from November 2018 to November 2019. Malignant and benign tumours were determined by biopsy with histological analysis and by follow-up within 24 months. Radiomics and deep learning methods were employed to extract breast mass features from the images and, ultimately, to classify masses as benign or malignant. The DM, DBT, and DM combined with DBT (DM + DBT) images were fed into the radiomics and deep learning pipelines to construct the corresponding models. The area under the receiver operating characteristic curve (AUC) was employed to estimate model performance. An external dataset of 146 patients from another center, collected from March 2021 to December 2022, was enrolled for external validation. RESULTS In the internal testing dataset, compared with the DM model and the DBT model, the DM + DBT models based on radiomics and deep learning both showed statistically significantly higher AUCs [0.810 (RA-DM), 0.823 (RA-DBT) and 0.869 (RA-DM + DBT), P ≤ 0.001; 0.867 (DL-DM), 0.871 (DL-DBT) and 0.908 (DL-DM + DBT), P = 0.001]. The deep learning models were superior to the radiomics models in the experiments with only DM (0.867 vs 0.810, P = 0.001), only DBT (0.871 vs 0.823, P = 0.001), and DM + DBT (0.908 vs 0.869, P = 0.003). CONCLUSIONS DBT offers clear additional value for diagnosing breast masses less than 2 cm compared with DM alone. AI-based methods, especially deep learning, can help achieve excellent performance.
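The head-to-head AUC comparison of single-modality and combined-feature models can be illustrated generically. In the sketch below, random feature matrices and a logistic regression stand in for the paper's radiomics and deep learning pipelines; only the evaluation pattern is the point.

```python
# AUC comparison of single-modality vs. combined feature models, with
# random stand-in features and a simple logistic regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 512
X_dm = rng.normal(size=(n, 100))   # stand-in for DM features
X_dbt = rng.normal(size=(n, 100))  # stand-in for DBT features
y = rng.integers(0, 2, size=n)     # benign (0) vs. malignant (1)

for name, X in [("DM", X_dm), ("DBT", X_dbt),
                ("DM+DBT", np.hstack([X_dm, X_dbt]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```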
Affiliation(s)
- Zhenzhen Shao
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Yuxin Cai
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Yujuan Hao
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Congyi Hu
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Ziling Yu
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Yue Shen
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Fei Gao
- School of Computer Science, Peking University, Beijing, PR China.
- Wenjuan Ma
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
- Qian Zhou
- Department of Breast imaging, The affiliated Hospital of Qingdao University, Qingdao, PR China.
- Jingjing Chen
- Department of Breast imaging, The affiliated Hospital of Qingdao University, Qingdao, PR China.
- Hong Lu
- Department of Breast Imaging, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin, PR China.
10
Biroš M, Kvak D, Dandár J, Hrubý R, Janů E, Atakhanova A, Al-antari MA. Enhancing Accuracy in Breast Density Assessment Using Deep Learning: A Multicentric, Multi-Reader Study. Diagnostics (Basel) 2024; 14:1117. [PMID: 38893643 PMCID: PMC11172127 DOI: 10.3390/diagnostics14111117]
Abstract
The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841), matching and in four cases exceeding the performance of the individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
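The agreement metrics reported here (accuracy, F1, Cohen's kappa, each with a confidence interval) can be reproduced generically with scikit-learn plus a simple bootstrap. The labels below are simulated, not the study's data.

```python
# Accuracy, Cohen's kappa, and macro F1 with bootstrap 95% CIs on
# simulated four-class (BI-RADS A-D) labels.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 4, size=72)             # BI-RADS A-D as 0-3
y_pred = np.where(rng.random(72) < 0.8, y_true,  # ~80% raw agreement
                  rng.integers(0, 4, size=72))

def bootstrap_ci(metric, y_t, y_p, n_boot=2000, alpha=0.05):
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_t), len(y_t))  # resample with replacement
        stats.append(metric(y_t[idx], y_p[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return metric(y_t, y_p), lo, hi

print("accuracy:", bootstrap_ci(accuracy_score, y_true, y_pred))
print("kappa:   ", bootstrap_ci(cohen_kappa_score, y_true, y_pred))
print("macro F1:", bootstrap_ci(
    lambda t, p: f1_score(t, p, average="macro"), y_true, y_pred))
```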
Affiliation(s)
- Marek Biroš
- Carebot, Ltd., 128 00 Prague, Czech Republic; (M.B.); (J.D.); (R.H.); (A.A.)
- Daniel Kvak
- Carebot, Ltd., 128 00 Prague, Czech Republic; (M.B.); (J.D.); (R.H.); (A.A.)
- Department of Simulation Medicine, Faculty of Medicine, Masaryk University, 625 00 Brno, Czech Republic
- Jakub Dandár
- Carebot, Ltd., 128 00 Prague, Czech Republic; (M.B.); (J.D.); (R.H.); (A.A.)
- Robert Hrubý
- Carebot, Ltd., 128 00 Prague, Czech Republic; (M.B.); (J.D.); (R.H.); (A.A.)
- Eva Janů
- Department of Radiology, Masaryk Memorial Cancer Institute, 602 00 Brno, Czech Republic
- Anora Atakhanova
- Carebot, Ltd., 128 00 Prague, Czech Republic; (M.B.); (J.D.); (R.H.); (A.A.)
- Mugahed A. Al-antari
- Department of Artificial Intelligence and Data Science, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea;
11
Nissar I, Alam S, Masood S, Kashif M. MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms. Comput Methods Programs Biomed 2024; 248:108121. [PMID: 38531147 DOI: 10.1016/j.cmpb.2024.108121]
Abstract
BACKGROUND AND OBJECTIVE Deep learning models have emerged as a significant tool in generating efficient solutions for complex problems, including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, aiding the development of personalized treatment plans, as different subtypes of cancer respond better to different therapies. METHODS In this work, we propose a novel lightweight dual-channel attention-based deep learning model, MOB-CBAM, that utilizes the backbone of the MobileNet-V3 architecture with a Convolutional Block Attention Module to make highly accurate and precise predictions about breast cancer. We used the CMMD mammogram dataset to evaluate the proposed model in our study. Nine distinct data subsets were created from the original dataset to perform coarse- and fine-grained predictions, enabling the model to identify masses, calcifications, benign and malignant tumors, and molecular subtypes of cancer, including Luminal A, Luminal B, HER-2 positive, and triple negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, to enhance the model's generalization ability. RESULTS In identifying benign versus malignant tumors, i.e., coarse-grained classification, the MOB-CBAM model produced exceptional results, with 99% accuracy, precision, recall, and F1-score values of 0.99, and an MCC of 0.98. In fine-grained classification, the MOB-CBAM model proved highly effective in the mass (benign/malignant) and calcification (benign/malignant) classification tasks, with an impressive accuracy rate of 98%. We have also cross-validated the efficiency of the proposed MOB-CBAM deep learning architecture on two further datasets, MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97% was reported for the task of classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98% was achieved for classifying masses and calcifications as benign or malignant. CONCLUSION This study presents the lightweight MOB-CBAM, a novel deep learning framework, to address breast cancer diagnosis and subtype prediction. The model's innovative incorporation of the CBAM enhances precise predictions. The extensive evaluation on the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
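For readers unfamiliar with CBAM, the following is a generic PyTorch re-implementation of the module's two stages, channel attention followed by spatial attention. It illustrates the mechanism only, not the paper's exact configuration or hyperparameters.

```python
# Generic Convolutional Block Attention Module (CBAM): channel attention
# from pooled descriptors through a shared MLP, then spatial attention
# from channel-wise mean and max maps.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared channel MLP
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),   # spatial attention
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 28, 28)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 28, 28])
```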
Affiliation(s)
- Iqra Nissar
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India.
- Shahzad Alam
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Sarfaraz Masood
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Mohammad Kashif
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
12
Palomba G, Fernicola A, Corte MD, Capuano M, De Palma GD, Aprea G. Artificial intelligence in screening and diagnosis of surgical diseases: A narrative review. AIMS Public Health 2024; 11:557-576. [PMID: 39027395 PMCID: PMC11252578 DOI: 10.3934/publichealth.2024028]
Abstract
Artificial intelligence (AI) is playing an increasing role in several fields of medicine. It is also gaining popularity among surgeons as a valuable screening and diagnostic tool for many conditions such as benign and malignant colorectal, gastric, thyroid, parathyroid, and breast disorders. In the literature, there is no review that groups together the various application domains of AI when it comes to the screening and diagnosis of the main surgical diseases. The aim of this review is to describe the use of AI in these settings. We performed a literature review by searching PubMed, Web of Science, Scopus, and Embase for all studies investigating the role of AI in the surgical setting, published between January 01, 2000, and June 30, 2023. Our focus was on randomized controlled trials (RCTs), meta-analyses, systematic reviews, and observational studies dealing with large cohorts of patients. We then gathered further relevant studies from the reference lists of the selected publications. Based on the studies reviewed, it emerges that AI could strongly enhance screening efficiency, clinical ability, and diagnostic accuracy for several surgical conditions. Some of the future advantages of this technology include implementing, speeding up, and improving the automaticity with which AI recognizes, differentiates, and classifies the various conditions.
Affiliation(s)
- Giuseppe Palomba
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Agostino Fernicola
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Marcello Della Corte
- Azienda Ospedaliera Universitaria San Giovanni di Dio e Ruggi d'Aragona - OO. RR. Scuola Medica Salernitana, Salerno, Italy
- Marianna Capuano
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Giovanni Domenico De Palma
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Giovanni Aprea
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
13
Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual-Multi-layer perceptron network for mammographic breast cancer screening. Comput Methods Programs Biomed 2024; 247:108101. [PMID: 38432087 DOI: 10.1016/j.cmpb.2024.108101]
Abstract
BACKGROUND AND OBJECTIVE Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
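A skeletal version of such a multi-task design, a shared encoder feeding a density head, a benign/malignant head, and a segmentation head, might look as follows. Layer choices are illustrative only; the paper's residual-MLP fusion stage is not reproduced.

```python
# Illustrative multi-task skeleton: one encoder, three task heads whose
# outputs could then feed a downstream fusion classifier.
import torch
import torch.nn as nn

class MultiTaskMammoNet(nn.Module):
    def __init__(self, n_density: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.density_head = nn.Sequential(        # BI-RADS density A-D
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_density))
        self.malignancy_head = nn.Sequential(     # benign vs. malignant
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
        self.seg_head = nn.Sequential(            # coarse mass mask decoder
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2))

    def forward(self, x):
        z = self.encoder(x)
        return (self.density_head(z), self.malignancy_head(z),
                self.seg_head(z))

density, malignancy, mask = MultiTaskMammoNet()(torch.randn(2, 1, 256, 256))
print(density.shape, malignancy.shape, mask.shape)
```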
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China.
- Baolin Tan
- Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
14
Lei M, Zhao J, Zhou J, Lee H, Wu Q, Burns Z, Chen G, Liu Z. Super resolution label-free dark-field microscopy by deep learning. Nanoscale 2024; 16:4703-4709. [PMID: 38268454 DOI: 10.1039/d3nr04294d]
Abstract
Dark-field microscopy (DFM) is a powerful label-free and high-contrast imaging technique due to its ability to reveal features of transparent specimens with inhomogeneities. However, owing to Abbe's diffraction limit, fine structures at the sub-wavelength scale are difficult to resolve. In this work, we report a single-image super-resolution DFM scheme using a convolutional neural network (CNN). A U-net-based CNN is trained with a dataset that is numerically simulated based on the forward physical model of the DFM. The forward physical model, described by the parameters of the imaging setup, connects the object ground truths and dark-field images. With the trained network, we demonstrate super-resolution dark-field imaging of various test samples with a twofold resolution improvement. Our technique illustrates a promising deep learning approach to double the resolution of DFM without any hardware modification.
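The core training idea, generating (ground truth, degraded image) pairs from a forward model, can be sketched generically. In the stand-in below, an isotropic Gaussian point spread function plus noise replaces the paper's dark-field forward model, which is specific to its optical setup.

```python
# Generic sketch of simulation-based training data: ground-truth scenes
# of point-like scatterers degraded by a Gaussian PSF (a crude stand-in
# for a diffraction-limited imaging model) plus sensor noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_pair(size: int = 128, rng=np.random.default_rng(0)):
    gt = np.zeros((size, size))
    for _ in range(20):                       # random sub-wavelength scatterers
        r, c = rng.integers(0, size, 2)
        gt[r, c] = rng.uniform(0.5, 1.0)
    blurred = gaussian_filter(gt, sigma=3.0)  # PSF stands in for diffraction
    noisy = blurred + rng.normal(0, 0.01, gt.shape)
    return gt, noisy                          # network target and input

gt, img = simulate_pair()
print(gt.shape, img.shape)  # (128, 128) (128, 128) -> one U-net training pair
```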
Affiliation(s)
- Ming Lei
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Junxiang Zhao
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Junxiao Zhou
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Hongki Lee
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Qianyi Wu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Zachary Burns
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Guanghao Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Zhaowei Liu
- Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California, 92093, USA.
- Materials Science and Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
15
Khara G, Trivedi H, Newell MS, Patel R, Rijken T, Kecskemethy P, Glocker B. Generalisable deep learning method for mammographic density prediction across imaging techniques and self-reported race. Commun Med (Lond) 2024; 4:21. [PMID: 38374436 PMCID: PMC10876691 DOI: 10.1038/s43856-024-00446-6]
Abstract
BACKGROUND Breast density is an important risk factor for breast cancer, compounded by a higher risk of cancers being missed during screening of dense breasts due to the reduced sensitivity of mammography. Automated, deep learning-based prediction of breast density could provide subject-specific risk assessment and flag difficult cases during screening. However, there is a lack of evidence for generalisability across imaging techniques and, importantly, across race. METHODS This study used a large, racially diverse dataset with 69,697 mammographic studies comprising 451,642 individual images from 23,057 female participants. A deep learning model was developed for four-class BI-RADS density prediction. A comprehensive performance evaluation assessed the generalisability across two imaging techniques, full-field digital mammography (FFDM) and two-dimensional synthetic (2DS) mammography. A detailed subgroup performance and bias analysis assessed the generalisability across participants' race. RESULTS Here we show that a model trained on FFDM only achieves a 4-class BI-RADS classification accuracy of 80.5% (79.7-81.4) on FFDM and 79.4% (78.5-80.2) on unseen 2DS data. When trained on both FFDM and 2DS images, the performance increases to 82.3% (81.4-83.0) on FFDM and 82.3% (81.3-83.1) on 2DS. Racial subgroup analysis shows unbiased performance across Black, White, and Asian participants, despite a separate analysis confirming that race can be predicted from the images with a high accuracy of 86.7% (86.0-87.4). CONCLUSIONS Deep learning-based breast density prediction generalises across imaging techniques and race. No substantial disparities are found for any subgroup, including races that were never seen during model development, suggesting that density predictions are unbiased.
Affiliation(s)
- Hari Trivedi
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Mary S Newell
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ravi Patel
- Kheiron Medical Technologies, London, UK
- Ben Glocker
- Kheiron Medical Technologies, London, UK.
- Department of Computing, Imperial College London, London, UK.
16
Patra A, Behera SK, Sethy PK, Barpanda NK. Breast mass density categorisation using deep transferred EfficientNet with support vector machines. Multimed Tools Appl 2024; 83:74883-74896. [DOI: 10.1007/s11042-024-18507-2]
17
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114 PMCID: PMC10894909 DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard method for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper aims to review recent achievements of deep learning-based mammography for breast cancer detection and classification. The review highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that the research findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
18
Aliniya P, Nicolescu M, Nicolescu M, Bebis G. Improved Loss Function for Mass Segmentation in Mammography Images Using Density and Mass Size. J Imaging 2024; 10:20. [PMID: 38249005 PMCID: PMC10816853 DOI: 10.3390/jimaging10010020]
Abstract
Mass segmentation is one of the fundamental tasks used when identifying breast cancer due to the comprehensive information it provides, including the location, size, and border of the masses. Despite significant improvement in the performance of the task, certain properties of the data, such as pixel class imbalance and the diverse appearance and sizes of masses, remain challenging. Recently, there has been a surge in articles proposing to address pixel class imbalance through the formulation of the loss function. While demonstrating an enhancement in performance, they mostly fail to address the problem comprehensively. In this paper, we propose a new perspective on the calculation of the loss that enables the binary segmentation loss to incorporate the sample-level information and region-level losses in a hybrid loss setting. We propose two variations of the loss to include mass size and density in the loss calculation. Also, we introduce a single loss variant using the idea of utilizing mass size and density to enhance focal loss. We tested the proposed method on benchmark datasets: CBIS-DDSM and INbreast. Our approach outperformed the baseline and state-of-the-art methods on both datasets.
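A minimal sketch of the abstract's central idea, scaling a per-sample segmentation loss by mass size and breast density, is shown below in PyTorch; the specific weighting scheme is an assumption for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def size_density_weighted_loss(logits, targets, density, eps=1e-6):
    # logits/targets: (N, 1, H, W); density: (N,) scores scaled to [0, 1].
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_sample = per_pixel.mean(dim=(1, 2, 3))   # (N,)
    mass_size = targets.mean(dim=(1, 2, 3))      # fraction of mass pixels
    # Up-weight small masses and dense breasts, where segmentation is hardest.
    weights = (1.0 / (mass_size + eps)).clamp(max=100.0) * (1.0 + density)
    return (weights * per_sample).mean()

loss = size_density_weighted_loss(
    torch.randn(4, 1, 128, 128),
    torch.randint(0, 2, (4, 1, 128, 128)).float(),
    torch.tensor([0.2, 0.8, 0.5, 0.9]))
print(float(loss))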
Collapse
Affiliation(s)
- Parvaneh Aliniya
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, 89557 NV, USA; (M.N.); (G.B.)
| | - Mircea Nicolescu
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, 89557 NV, USA; (M.N.); (G.B.)
| | | | | |
Collapse
|
19
|
Park HL, Ziogas A, Feig SA, Kirmizi RL, Lee CJ, Alvarez A, Lucia RM, Goodman D, Larsen KM, Kelly R, Anton-Culver H. Factors Associated with Longitudinal Changes in Mammographic Density in a Multiethnic Breast Screening Cohort of Postmenopausal Women. Breast J 2023; 2023:2794603. [PMID: 37881237 PMCID: PMC10597735 DOI: 10.1155/2023/2794603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Revised: 07/19/2023] [Accepted: 10/04/2023] [Indexed: 10/27/2023]
Abstract
Background Breast density is an important risk factor for breast cancer and is known to be associated with characteristics such as age, race, and hormone levels; however, it is unclear what factors contribute to changes in breast density in postmenopausal women over time. Understanding factors associated with density changes may enable a better understanding of breast cancer risk and facilitate potential strategies for prevention. Methods This study investigated potential associations between personal factors and changes in mammographic density in a cohort of 3,392 postmenopausal women with no personal history of breast cancer between 2011 and 2017. Self-reported information on demographics, breast and reproductive history, and lifestyle factors, including body mass index (BMI), alcohol intake, smoking, and physical activity, was collected by an electronic intake form, and Breast Imaging Reporting and Data System (BI-RADS) mammographic density scores were obtained from electronic medical records. Factors associated with a longitudinal increase or decrease in mammographic density were identified using Fisher's exact test and multivariate conditional logistic regression. Results Overall, 7.9% of women exhibited a longitudinal decrease in mammographic density, 6.7% exhibited an increase, and 85.4% exhibited no change. Longitudinal changes in mammographic density were correlated with age, race/ethnicity, and age at menopause in the univariate analysis. In the multivariate analysis, Asian women were more likely to exhibit a longitudinal increase in mammographic density and less likely to exhibit a decrease compared to White women. On the other hand, obese women were less likely to exhibit an increase and more likely to exhibit a decrease compared to normal-weight women. Women who underwent menopause at age 55 years or older were less likely to exhibit a decrease in mammographic density compared to women who underwent menopause at a younger age. Besides obesity, lifestyle factors (alcohol intake, smoking, and physical activity) were not associated with longitudinal changes in mammographic density. Conclusions The associations we observed between Asian race/obesity and longitudinal changes in BI-RADS density in postmenopausal women are paradoxical in that breast cancer risk is lower in Asian women and higher in obese women. However, the association between later age at menopause and a decreased likelihood of a decrease in BI-RADS density over time is consistent with later age at menopause being a risk factor for breast cancer and suggests a potential relationship between greater cumulative lifetime estrogen exposure and relative stability in breast density after menopause. Our findings support the complexity of the relationships between breast density, BMI, hormone exposure, and breast cancer risk.
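As a toy illustration of the univariate step described above, the snippet below runs Fisher's exact test on a 2x2 table of obesity status versus longitudinal density decrease; the counts are invented for demonstration, not study data.

from scipy.stats import fisher_exact

#                decrease  no decrease
table = [[40, 360],   # obese
         [28, 540]]   # normal weight
odds_ratio, p_value = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, p={p_value:.4f}")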
Collapse
Affiliation(s)
- Hannah Lui Park
- Department of Pathology and Laboratory Medicine, University of California, Irvine, CA, USA
- Department of Epidemiology and Biostatistics, University of California, Irvine, CA, USA
| | - Argyrios Ziogas
- Department of Medicine, University of California, Irvine, CA, USA
| | - Stephen A. Feig
- Department of Radiological Sciences, University of California, Irvine, CA, USA
| | - Roza Lorin Kirmizi
- Department of Biological Sciences, University of California, Irvine, CA, USA
| | - Christie Jiwon Lee
- Department of Pharmaceutical Sciences, University of California, Irvine, CA, USA
| | - Andrea Alvarez
- Department of Medicine, University of California, Irvine, CA, USA
| | | | - Deborah Goodman
- Department of Epidemiology and Biostatistics, University of California, Irvine, CA, USA
| | - Kathryn M. Larsen
- Department of Family Medicine, University of California, Irvine, CA, USA
| | - Richard Kelly
- Department of Clinical Informatics, University of California, Irvine, CA, USA
| | | |
Collapse
|
20
|
Tsarouchi MI, Hoxhaj A, Mann RM. New Approaches and Recommendations for Risk-Adapted Breast Cancer Screening. J Magn Reson Imaging 2023; 58:987-1010. [PMID: 37040474 DOI: 10.1002/jmri.28731] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 03/23/2023] [Accepted: 03/24/2023] [Indexed: 04/13/2023] Open
Abstract
Population-based breast cancer screening using mammography as the gold standard imaging modality has been in clinical practice for over 40 years. However, the limitations of mammography in terms of sensitivity and high false-positive rates, particularly in high-risk women, challenge the indiscriminate nature of population-based screening. Additionally, in light of expanding research on new breast cancer risk factors, there is a growing consensus that breast cancer screening should move toward a risk-adapted approach. Recent advancements in breast imaging technology, including contrast material-enhanced mammography (CEM), ultrasound (US) (automated-breast US, Doppler, elastography US), and especially magnetic resonance imaging (MRI) (abbreviated, ultrafast, and contrast-agent free), may provide new opportunities for risk-adapted personalized screening strategies. Moreover, the integration of artificial intelligence and radiomics techniques has the potential to enhance the performance of risk-adapted screening. This review article summarizes the current evidence and challenges in breast cancer screening and highlights potential future perspectives for various imaging techniques in a risk-adapted breast cancer screening approach. EVIDENCE LEVEL: 1. TECHNICAL EFFICACY: Stage 5.
Collapse
Affiliation(s)
- Marialena I Tsarouchi
- Department of Radiology, Nuclear Medicine and Anatomy, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Alma Hoxhaj
- Department of Radiology, Nuclear Medicine and Anatomy, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Ritse M Mann
- Department of Radiology, Nuclear Medicine and Anatomy, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Radiology, the Netherlands Cancer Institute, Amsterdam, the Netherlands
| |
Collapse
|
21
|
Zhou W, Zhang X, Ding J, Deng L, Cheng G, Wang X. Improved breast lesion detection in mammogram images using a deep neural network. Diagn Interv Radiol 2023; 29:588-595. [PMID: 36994940 PMCID: PMC10679640 DOI: 10.4274/dir.2022.22826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 08/27/2022] [Indexed: 01/15/2023]
Abstract
PURPOSE This study aimed to investigate the effect of using a deep neural network (DNN) in breast cancer (BC) detection. METHODS In this retrospective study, a DNN-based model was constructed from a total of 880 mammograms that 220 patients underwent between April and June 2020. The mammograms were reviewed by two senior and two junior radiologists with and without the aid of the DNN model. The performance of the network was assessed by comparing the area under the curve (AUC) and receiver operating characteristic curves for the detection of four features of malignancy (masses, calcifications, asymmetries, and architectural distortions), with and without the aid of the DNN model, by the senior and junior radiologists. Additionally, the effect of utilizing the DNN on diagnosis time for both the senior and junior radiologists was evaluated. RESULTS The AUCs of the model for the detection of masses and calcifications were 0.877 and 0.937, respectively. In the senior radiologist group, the AUC values for the evaluation of masses, calcifications, and asymmetries were significantly higher with the DNN model than without it. Similar effects were observed in the junior radiologist group, but the increase in the AUC values was even more dramatic. The median mammogram assessment times of the junior and senior radiologists were 572 (357-951) s and 273.5 (129-469) s, respectively, with the DNN model; the corresponding assessment times without the model were 739 (445-1003) s and 321 (195-491) s, respectively. CONCLUSION The DNN model exhibited high accuracy in detecting the four named features of BC and effectively shortened the review time for both senior and junior radiologists.
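One way to reproduce this style of comparison is a paired bootstrap of reader AUCs with and without DNN aid, as in the sketch below; the score arrays are synthetic placeholders rather than study data.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                 # ground-truth malignancy labels
s_alone = y * 0.6 + rng.random(200) * 0.7   # simulated unaided reader scores
s_aided = y * 0.8 + rng.random(200) * 0.5   # simulated DNN-aided reader scores

diffs = []
for _ in range(2000):                       # paired bootstrap over cases
    idx = rng.integers(0, len(y), len(y))
    if len(set(y[idx])) < 2:
        continue                            # resample must contain both classes
    diffs.append(roc_auc_score(y[idx], s_aided[idx]) - roc_auc_score(y[idx], s_alone[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC gain 95% CI: [{lo:.3f}, {hi:.3f}]")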
Collapse
Affiliation(s)
- Wen Zhou
- Department of Radiology, Peking University First Hospital, Beijing, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
| | - Jia Ding
- Beijing Yizhun Medical AI Co., Ltd, Beijing, China
| | - Lingbo Deng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
| | - Guanxun Cheng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
| |
Collapse
|
22
|
Sexauer R, Hejduk P, Borkowski K, Ruppert C, Weikert T, Dellas S, Schmidt N. Diagnostic accuracy of automated ACR BI-RADS breast density classification using deep convolutional neural networks. Eur Radiol 2023; 33:4589-4596. [PMID: 36856841 PMCID: PMC10289992 DOI: 10.1007/s00330-023-09474-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 01/17/2023] [Accepted: 01/26/2023] [Indexed: 03/02/2023]
Abstract
OBJECTIVES High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two (MLO, CC) deep convolutional neural networks (DCNN) for automatic breast density classification on synthetic 2D tomosynthesis reconstructions. METHODS In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to the ACR (American College of Radiology) density categories (A-D). Two DCNNs with 11 convolutional layers and 3 fully connected layers each were trained with 70% of the data, whereas 20% was used for validation. The remaining 10% was used as a separate test dataset with 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with two and reader 2 with 11 years of dedicated mammographic experience in breast imaging), and the consensus was formed as the reference standard. The inter- and intra-reader reliabilities were assessed by calculating Cohen's kappa coefficients, and diagnostic accuracy measures of automated classification were evaluated. RESULTS The two models for MLO and CC projections had a mean sensitivity of 80.4% (95%-CI 72.2-86.9), a specificity of 89.3% (95%-CI 85.4-92.3), and an accuracy of 89.6% (95%-CI 88.1-90.9) in the differentiation between ACR A/B and ACR C/D. Agreement between the DCNN and human readers and inter-reader agreement were both "substantial" (Cohen's kappa: 0.61 versus 0.63). CONCLUSION The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system. KEY POINTS • A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions. • The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.
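A sketch in the spirit of the architecture described (11 convolutional layers followed by 3 fully connected layers) is given below; the channel widths, pooling placement, and input size are assumptions, as the abstract does not specify them.

import torch
import torch.nn as nn

def make_dcnn(n_conv: int = 11, num_classes: int = 4, in_ch: int = 1) -> nn.Module:
    layers, ch = [], in_ch
    widths = [32, 32, 64, 64, 128, 128, 128, 256, 256, 256, 256][:n_conv]
    for i, w in enumerate(widths):
        layers += [nn.Conv2d(ch, w, 3, padding=1), nn.BatchNorm2d(w), nn.ReLU()]
        if i % 2 == 1:
            layers.append(nn.MaxPool2d(2))   # halve resolution every two convs
        ch = w
    layers.append(nn.AdaptiveAvgPool2d(4))
    head = nn.Sequential(nn.Flatten(), nn.Linear(ch * 16, 256), nn.ReLU(),
                         nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, num_classes))
    return nn.Sequential(*layers, head)

print(make_dcnn()(torch.randn(1, 1, 256, 256)).shape)  # torch.Size([1, 4])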
Collapse
Affiliation(s)
- Raphael Sexauer
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland.
| | - Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
| | - Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
| | - Carlotta Ruppert
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
| | - Thomas Weikert
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
| | - Sophie Dellas
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
| | - Noemi Schmidt
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
| |
Collapse
|
23
|
Hwang I, Trivedi H, Brown-Mulry B, Zhang L, Nalla V, Gastounioti A, Gichoya J, Seyyed-Kalantari L, Banerjee I, Woo M. Impact of multi-source data augmentation on performance of convolutional neural networks for abnormality classification in mammography. FRONTIERS IN RADIOLOGY 2023; 3:1181190. [PMID: 37588666 PMCID: PMC10426498 DOI: 10.3389/fradi.2023.1181190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 05/30/2023] [Indexed: 08/18/2023]
Abstract
Introduction To date, most mammography-related AI models have been trained using either film or digital mammogram datasets with little overlap. We investigated whether combining film and digital mammography during training helps or hinders modern models designed for use on digital mammograms. Methods To this end, a total of six binary classifiers were trained for comparison. The first three classifiers were trained using images only from the Emory Breast Imaging Dataset (EMBED) using ResNet50, ResNet101, and ResNet152 architectures. The next three classifiers were trained using images from EMBED, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and the Digital Database for Screening Mammography (DDSM). All six models were tested only on digital mammograms from EMBED. Results The results showed that the performance degradation of the customized ResNet models was statistically significant overall when the EMBED dataset was augmented with CBIS-DDSM/DDSM. While performance degradation was observed in all racial subgroups, some subgroups experienced more severe performance drops than others. Discussion The degradation may potentially be due to (1) a mismatch in features between film-based and digital mammograms and (2) a mismatch in pathologic and radiological information. In conclusion, use of both film and digital mammography during training may hinder modern models designed for breast cancer screening. Caution is required when combining film-based and digital mammograms or when utilizing pathologic and radiological information simultaneously.
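Mechanically, this kind of multi-source training reduces to pooling datasets before batching; the sketch below shows the pattern with torch.utils.data.ConcatDataset, where the TensorDataset objects are hypothetical stand-ins for real EMBED and CBIS-DDSM/DDSM loaders.

import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for EMBED (digital) and CBIS-DDSM/DDSM (digitized film) datasets.
embed = TensorDataset(torch.randn(100, 1, 224, 224), torch.randint(0, 2, (100,)))
ddsm = TensorDataset(torch.randn(100, 1, 224, 224), torch.randint(0, 2, (100,)))

combined = ConcatDataset([embed, ddsm])     # pooled multi-source training set
loader = DataLoader(combined, batch_size=16, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels.shape)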
Collapse
Affiliation(s)
- InChan Hwang
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, GA, United States
| | - Hari Trivedi
- Department of Radiology, Emory University, Atlanta, GA, United States
| | - Beatrice Brown-Mulry
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, GA, United States
| | - Linglin Zhang
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, GA, United States
| | - Vineela Nalla
- Department of Information Technology, Kennesaw State University, Kennesaw, GA, United States
| | - Aimilia Gastounioti
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, United States
| | - Judy Gichoya
- Department of Radiology, Emory University, Atlanta, GA, United States
| | - Laleh Seyyed-Kalantari
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
| | - Imon Banerjee
- Department of Radiology, Mayo Clinic Arizona, Phoenix, AZ, United States
| | - MinJae Woo
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, GA, United States
| |
Collapse
|
24
|
Tong Y, Jie B, Wang X, Xu Z, Ding P, He Y. Is Convolutional Neural Network Accurate for Automatic Detection of Zygomatic Fractures on Computed Tomography? J Oral Maxillofac Surg 2023:S0278-2391(23)00393-2. [PMID: 37217163 DOI: 10.1016/j.joms.2023.04.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 03/29/2023] [Accepted: 04/23/2023] [Indexed: 05/24/2023]
Abstract
PURPOSE Zygomatic fractures involve complex anatomical structures of the mid-face, and their diagnosis can be challenging and labor-intensive. This research aimed to evaluate the performance of an automatic algorithm for the detection of zygomatic fractures based on a convolutional neural network (CNN) on spiral computed tomography (CT). MATERIALS AND METHODS We designed a cross-sectional retrospective diagnostic trial study. Clinical records and CT scans of patients with zygomatic fractures were reviewed. The sample consisted of two types of patients with different zygomatic fracture statuses (positive or negative) at Peking University School of Stomatology from 2013 to 2019. All CT samples were randomly divided into three groups at a ratio of 6:2:2 as training, validation, and test sets, respectively. All CT scans were viewed and annotated by three experienced maxillofacial surgeons, serving as the gold standard. The algorithm consisted of two modules: (1) segmentation of the zygomatic region on CT based on U-Net, a type of CNN model; (2) detection of fractures based on the 34-layer Deep Residual Network (ResNet34). The region segmentation model was used first to detect and extract the zygomatic region, then the detection model was used to detect the fracture status. The Dice coefficient was used to evaluate the performance of the segmentation algorithm. Sensitivity and specificity were used to assess the performance of the detection model. The covariates included age, gender, duration of injury, and the etiology of fractures. RESULTS A total of 379 patients with an average age of 35.43 ± 12.74 years were included in the study. There were 203 non-fracture patients and 176 fracture patients with 220 sites of zygomatic fractures (44 patients had bilateral fractures). The Dice coefficients between the zygomatic region detection model and the gold standard verified by manual labeling were 0.9337 (coronal plane) and 0.9269 (sagittal plane), respectively. The sensitivity and specificity of the fracture detection model were 100% (p > 0.05). CONCLUSION The performance of the algorithm based on CNNs was not statistically different from the gold standard (manual diagnosis) for zygomatic fracture detection, supporting its potential for clinical application.
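The two-module design (segment the zygomatic region, then classify fracture status) can be sketched as below; the unet argument is a placeholder for any network returning a per-pixel mask, and the threshold, crop handling, and resize are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

classifier = models.resnet34(weights=None, num_classes=2)  # fracture vs no fracture
classifier.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # 1-channel CT

def detect_fracture(ct_slice: torch.Tensor, unet: nn.Module) -> torch.Tensor:
    # ct_slice: (1, 1, H, W). Segment the zygomatic region, crop it, classify it.
    mask = (unet(ct_slice).sigmoid() > 0.5).squeeze()          # (H, W) boolean mask
    ys, xs = mask.nonzero(as_tuple=True)
    if len(ys) == 0:
        return torch.tensor([1.0, 0.0])                        # no region found: default negative
    crop = ct_slice[..., ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    crop = nn.functional.interpolate(crop, size=(224, 224))
    return classifier(crop).softmax(dim=1).squeeze(0)          # (2,) class probabilities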
Collapse
Affiliation(s)
- Yanhang Tong
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | - Bimeng Jie
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | - Xuebing Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
| | | | | | - Yang He
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China.
| |
Collapse
|
25
|
Lin X, Wu S, Li L, Ouyang R, Ma J, Yi C, Tang Y. Automatic mammographic breast density classification in Chinese women: clinical validation of a deep learning model. Acta Radiol 2023; 64:1823-1830. [PMID: 36683330 DOI: 10.1177/02841851231152097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
BACKGROUND High breast density is a strong risk factor for breast cancer. As such, high consistency and accuracy in breast density assessment are necessary. PURPOSE To validate our proposed deep learning (DL) model and explore its impact on radiologists' density assessments. MATERIAL AND METHODS A total of 3732 mammographic cases were collected as a validation set: 1686 cases before the implementation of the DL model and 2046 cases after its implementation. Five radiologists were divided into two groups (junior and senior) to assess all mammograms using either two- or four-category evaluation. Linear-weighted kappa (K) and intraclass correlation coefficient (ICC) statistics were used to analyze the consistency between radiologists before and after implementation of the DL model. RESULTS The accuracy and clinical acceptance of the DL model for the junior group were 96.3% and 96.8% for the two-category evaluation, and 85.6% and 89.6% for the four-category evaluation, respectively. For the senior group, the accuracy and clinical acceptance were 95.5% and 98.0% for the two-category evaluation, and 84.3% and 95.3% for the four-category evaluation, respectively. The consistency within the junior group, within the senior group, and among all radiologists improved with the help of the DL model. For the two-category evaluation, their K and ICC values improved to 0.81, 0.81, and 0.80 from 0.73, 0.75, and 0.76; for the four-category evaluation, they improved to 0.81, 0.82, and 0.82 from 0.73, 0.79, and 0.78, respectively. CONCLUSION The DL model showed high accuracy and clinical acceptance across breast density categories and helped improve radiologists' consistency.
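For reference, the agreement statistics named here can be computed as in the sketch below: linear-weighted kappa via scikit-learn and ICC via the pingouin package (an assumed dependency); the ratings are toy values, not study data.

import pandas as pd
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

reader1 = [0, 1, 2, 3, 2, 1, 0, 3]   # BI-RADS density A-D coded 0-3
reader2 = [0, 1, 2, 2, 2, 1, 1, 3]
print("weighted kappa:", cohen_kappa_score(reader1, reader2, weights="linear"))

long = pd.DataFrame({
    "case": list(range(8)) * 2,
    "reader": ["r1"] * 8 + ["r2"] * 8,
    "score": reader1 + reader2,
})
print(pg.intraclass_corr(data=long, targets="case", raters="reader", ratings="score"))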
Collapse
Affiliation(s)
- Xiaohui Lin
- Department of Radiology, Shenzhen People's Hospital, the Second Clinical Medical College of Jinan University, Shenzhen, PR China
| | - Shibin Wu
- Ping-An Technology, Shenzhen, PR China
| | - Lin Li
- Department of Radiology, Shenzhen People's Hospital, the Second Clinical Medical College of Jinan University, Shenzhen, PR China
| | - Rushan Ouyang
- Department of Radiology, Shenzhen People's Hospital, the Second Clinical Medical College of Jinan University, Shenzhen, PR China
| | - Jie Ma
- Department of Radiology, Shenzhen People's Hospital, the Second Clinical Medical College of Jinan University, Shenzhen, PR China
| | - Chunyan Yi
- Department of Radiology, Shenzhen People's Hospital, the Second Clinical Medical College of Jinan University, Shenzhen, PR China
| | - Yuxing Tang
- Ping-An Technology, Shenzhen, PR China
| |
Collapse
|
26
|
Neural Network in the Analysis of the MR Signal as an Image Segmentation Tool for the Determination of T1 and T2 Relaxation Times with Application to Cancer Cell Culture. Int J Mol Sci 2023; 24:ijms24021554. [PMID: 36675075 PMCID: PMC9861169 DOI: 10.3390/ijms24021554] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/31/2022] [Accepted: 01/03/2023] [Indexed: 01/14/2023] Open
Abstract
Artificial intelligence has been entering medical research, and manufacturers of diagnostic instruments now include algorithms based on neural networks, which are quickly entering all branches of medical research and beyond. Analyzing the PubMed database over the last 5 years (2017 to 2021), the number of responses to the query "neural network in medicine" exceeds 10,500 papers. Deep learning algorithms are of particular importance in oncology. This paper presents the use of neural networks to analyze the magnetic resonance imaging (MRI) images used to determine the MRI relaxometry of samples. Relaxometry is becoming an increasingly common tool in diagnostics. The aim of this work was to optimize the processing time of DICOM images by using a neural network implemented in the MATLAB package by The MathWorks with the patternnet function. The application of a neural network helps to eliminate regions in which there are no objects whose characteristics match the phenomenon of longitudinal or transverse MRI relaxation; the result is the elimination of aerated spaces in MRI images. The whole algorithm was implemented as an application in the MATLAB package.
Collapse
|
27
|
Sundell VM, Mäkelä T, Vitikainen AM, Kaasalainen T. Convolutional neural network -based phantom image scoring for mammography quality control. BMC Med Imaging 2022; 22:216. [PMID: 36476319 PMCID: PMC9727908 DOI: 10.1186/s12880-022-00944-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 11/28/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Visual evaluation of phantom images is an important, but time-consuming, part of mammography quality control (QC). Consistent scoring of phantom images over the device's lifetime is highly desirable. Recently, convolutional neural networks (CNNs) have been applied to a wide range of image classification problems, performing with high accuracy. The purpose of this study was to automate the mammography QC phantom scoring task by training CNN models to mimic a human reviewer. METHODS Eight CNN variations consisting of three to ten convolutional layers were trained for detecting targets (fibres, microcalcifications, and masses) in American College of Radiology (ACR) accreditation phantom images, and the results were compared with human scoring. Regular and artificially degraded/improved QC phantom images from eight mammography devices were visually evaluated by one reviewer. These images were used in training the CNN models. A separate test set consisted of daily QC images from the eight devices and separately acquired images with varying dose levels. These were scored by four reviewers and considered the ground truth for CNN performance testing. RESULTS Although the hyper-parameter search space was limited, an optimal network depth was identified, after which additional layers resulted in decreased accuracy. The highest scoring accuracy (95%) was achieved with the CNN consisting of six convolutional layers. The largest deviation between the CNN and the reviewers was found at the lowest dose levels. No significant difference emerged between the visual reviews and CNN results except in the case of the smallest masses. CONCLUSION A CNN-based automatic mammography QC phantom scoring system can score phantom images in good agreement with human reviewers, and can therefore be of benefit in mammography QC.
Collapse
Affiliation(s)
- Veli-Matti Sundell
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Teemu Mäkelä
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Anne-Mari Vitikainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Touko Kaasalainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| |
Collapse
|
28
|
Artificial intelligence and machine learning in cancer imaging. COMMUNICATIONS MEDICINE 2022; 2:133. [PMID: 36310650 PMCID: PMC9613681 DOI: 10.1038/s43856-022-00199-0] [Citation(s) in RCA: 107] [Impact Index Per Article: 35.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.
Collapse
|
29
|
Jones MA, Islam W, Faiz R, Chen X, Zheng B. Applying artificial intelligence technology to assist with breast cancer diagnosis and prognosis prediction. Front Oncol 2022; 12:980793. [PMID: 36119479 PMCID: PMC9471147 DOI: 10.3389/fonc.2022.980793] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 08/04/2022] [Indexed: 12/27/2022] Open
Abstract
Breast cancer remains the most diagnosed cancer in women. Advances in medical imaging modalities and technologies have greatly aided in the early detection of breast cancer and the decline of patient mortality rates. However, reading and interpreting breast images remains difficult due to the high heterogeneity of breast tumors and fibro-glandular tissue, which results in lower cancer detection sensitivity and specificity and large inter-reader variability. In order to help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes of breast images to provide radiologists with decision-making support tools. Recent rapid advances in high throughput data analysis methods and artificial intelligence (AI) technologies, particularly radiomics and deep learning techniques, have led to an exponential increase in the development of new AI-based models of breast images that cover a broad range of application topics. In this review paper, we focus on reviewing recent advances in better understanding the association between radiomics features and tumor microenvironment and the progress in developing new AI-based quantitative image feature analysis models in three realms of breast cancer: predicting breast cancer risk, the likelihood of tumor malignancy, and tumor response to treatment. The outlook and three major challenges of applying new AI-based models of breast images to clinical practice are also discussed. Through this review we conclude that although developing new AI-based models of breast images has achieved significant progress and promising results, several obstacles to applying these new AI-based models to clinical practice remain. Therefore, more research effort is needed in future studies.
Collapse
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK, United States
| | - Warid Islam
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
| | - Rozwat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
| | - Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
| |
Collapse
|
30
|
Huang X, Chen X, Chen X, Wang W. Screening of Serum miRNAs as Diagnostic Biomarkers for Lung Cancer Using the Minimal-Redundancy-Maximal-Relevance Algorithm and Random Forest Classifier Based on a Public Database. Public Health Genomics 2022; 25:1-9. [PMID: 35917800 DOI: 10.1159/000525316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 05/12/2022] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Lung cancer is one of the deadliest cancers, and early diagnosis can substantially improve patient survival. We aimed to screen serum miRNAs as diagnostic biomarkers for patients with lung cancer. METHODS A total of 416 significantly differentially expressed miRNAs were acquired using the limma package, and feature ranking was then derived by the minimal-redundancy-maximal-relevance (mRMR) method. An incremental feature selection algorithm with a random forest (RF) classifier was utilized to choose the top 5 miRNA combination with the optimal predictive performance. The performance of the RF classifier built on the top 5 miRNAs was analyzed using the receiver operator characteristic (ROC) curve. Afterward, the classification effect of the 5-miRNA combination was validated through principal component analysis and hierarchical clustering analysis. Analysis of the top 5 miRNA expression levels between lung cancer patients and normal controls was performed based on the GSE137140 dataset, and their expression was validated by qPCR. Hierarchical clustering analysis was used to analyze the similarity of the 5 miRNAs' expression profiles. ROC analysis was undertaken on each miRNA. RESULTS We acquired the top 5 miRNAs, with a Matthews correlation coefficient value of 0.988 and an area under the curve (AUC) value of 0.996. The 5 feature miRNAs were capable of distinguishing most cancer patients from normal controls. Furthermore, except for miR-6875-5p, which was expressed at low levels in lung cancer tissue, the other 4 miRNAs were all highly expressed in cancer patients. Performance analysis revealed that their AUC values were 0.92, 0.96, 0.94, 0.95, and 0.93, respectively. CONCLUSION Overall, the 5 feature miRNAs screened here are anticipated to be effective biomarkers for lung cancer.
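A simplified stand-in for the described pipeline (greedy mRMR-style ranking, then incremental feature selection with a random forest scored by the Matthews correlation coefficient) is sketched below on synthetic data; the mutual-information relevance and correlation redundancy terms are common simplifications, not necessarily the paper's exact definitions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)

relevance = mutual_info_classif(X, y, random_state=0)
first = int(np.argmax(relevance))
selected, remaining = [first], set(range(50)) - {first}
while len(selected) < 10:   # greedy mRMR: maximize relevance minus redundancy
    scores = {j: relevance[j] - np.mean([abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                                         for k in selected]) for j in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.discard(best)

best_k, best_mcc = 1, -1.0  # incremental feature selection over the ranked list
for k in range(1, len(selected) + 1):
    pred = cross_val_predict(RandomForestClassifier(random_state=0),
                             X[:, selected[:k]], y, cv=5)
    mcc = matthews_corrcoef(y, pred)
    if mcc > best_mcc:
        best_k, best_mcc = k, mcc
print(f"top-{best_k} features, MCC={best_mcc:.3f}")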
Collapse
Affiliation(s)
- Xiaoyan Huang
- Medical Oncology, 900 Hospital of the Joint Logistics Team, Fuzhou, China
| | - Xiong Chen
- Medical Oncology, 900 Hospital of the Joint Logistics Team, Fuzhou, China
| | - Xi Chen
- Medical Oncology, 900 Hospital of the Joint Logistics Team, Fuzhou, China
| | - Wenling Wang
- Medical Oncology, 900 Hospital of the Joint Logistics Team, Fuzhou, China
| |
Collapse
|
31
|
|
32
|
Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
INTRODUCTION AND BACKGROUND Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, extracting the image features used to determine the severity of cancer at various magnifications is difficult, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology imaging and leading to mortality. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class in all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. CONCLUSIONS The findings were robust and suggest that the proposed system could assist experts at large medical institutions, supporting early breast cancer diagnosis and a reduction in the death rate.
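A hedged sketch of the described backbone follows: DenseNet201 and VGG16 feature maps are pooled, concatenated, and passed through a multi-head self-attention block before classification; the dimensions, pooling, and single-token attention are illustrative choices, not the DEEP_Pachi specification.

import torch
import torch.nn as nn
from torchvision import models

densenet = models.densenet201(weights=None).features   # 1920-channel feature maps
vgg = models.vgg16(weights=None).features              # 512-channel feature maps
pool = nn.AdaptiveAvgPool2d(1)
attn = nn.MultiheadAttention(embed_dim=1920 + 512, num_heads=8, batch_first=True)
head = nn.Linear(1920 + 512, 2)                        # benign vs malignant

x = torch.randn(2, 3, 224, 224)                        # dummy histopathology patches
f = torch.cat([pool(densenet(x)).flatten(1), pool(vgg(x)).flatten(1)], dim=1)
f = f.unsqueeze(1)                                     # (N, 1, D) one-token sequence
attended, _ = attn(f, f, f)                            # self-attention over fused features
print(head(attended.squeeze(1)).shape)                 # torch.Size([2, 2])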
Collapse
Affiliation(s)
- Chiagoziem C. Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Md Altab Hossain
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China;
| | - Jehoiada K. Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Grace U. Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Happy N. Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China;
| | - Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| |
Collapse
|
33
|
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 183] [Impact Index Per Article: 61.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow (n = 24) models. Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
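The review's two most favored configurations differ only in which weights stay trainable, as the sketch below shows; the ResNet-50 backbone is purely an example choice.

import torch.nn as nn
from torchvision import models

def as_feature_extractor(num_classes: int) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():
        p.requires_grad = False   # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # train only this new head
    return model

def as_fine_tuned(num_classes: int) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model                  # all weights remain trainable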
Collapse
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
| | - Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
| | - Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
| |
Collapse
|
34
|
Overview of Deep Learning Models in Biomedical Domain with the Help of R Statistical Software. SERBIAN JOURNAL OF EXPERIMENTAL AND CLINICAL RESEARCH 2022. [DOI: 10.2478/sjecr-2018-0063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
With the increase in the volume of data and the presence of structured and unstructured data in the biomedical field, there is a need for building models which can handle complex and non-linear relations in the data and also predict and classify outcomes with higher accuracy. Deep learning models are one such class of models: they can handle complex and nonlinear data and have been increasingly used in the biomedical field in recent years. Deep learning methodology evolved from artificial neural networks, which process the input data through multiple hidden layers with a higher level of abstraction. Deep learning networks are used in various fields such as image processing, speech recognition, fraud detection, classification, and prediction. The objective of this paper is to provide an overview of deep learning models and their application in the biomedical domain using the R statistical software. Deep learning concepts are illustrated using the R statistical software package, and X-ray images from NIH datasets are used to demonstrate the prediction accuracy of the models. The paper provides an overview of deep learning models, their types, and their application in the biomedical domain, and shows the effect of a deep learning network in classifying images into normal and diseased with 91% accuracy with the help of the R statistical package.
Collapse
|
35
|
Wimmer M, Sluiter G, Major D, Lenis D, Berg A, Neubauer T, Buhler K. Multi-Task Fusion for Improving Mammography Screening Data Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:937-950. [PMID: 34788218 DOI: 10.1109/tmi.2021.3129068] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate the fusion thereof, which is in contrast to the standard model ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors on patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions on patient level. Overall, our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
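One minimal reading of the fusion step is a late-fusion head over per-task feature vectors, sketched below; the two-task setup and dimensions are illustrative assumptions, not the authors' pipeline.

import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    def __init__(self, feat_dims=(512, 512), hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sum(feat_dims), hidden), nn.ReLU(),
            nn.Linear(hidden, 1))            # patient-level malignancy logit

    def forward(self, task_feats):
        return self.mlp(torch.cat(task_feats, dim=1))

head = LateFusionHead()
lesion_feats = torch.randn(2, 512)    # e.g. from a lesion-classification model
density_feats = torch.randn(2, 512)   # e.g. from a second task-specific model
print(head([lesion_feats, density_feats]).shape)  # torch.Size([2, 1])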
Collapse
|
36
|
Li H, Mukundan R, Boyd S. Spatial Distribution Analysis of Novel Texture Feature Descriptors for Accurate Breast Density Classification. SENSORS 2022; 22:s22072672. [PMID: 35408286 PMCID: PMC9002800 DOI: 10.3390/s22072672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 03/27/2022] [Accepted: 03/28/2022] [Indexed: 12/10/2022]
Abstract
Breast density has been recognised as an important biomarker that indicates the risk of developing breast cancer. Accurate classification of breast density plays a crucial role in developing a computer-aided detection (CADe) system for mammogram interpretation. This paper proposes a novel texture descriptor, namely, rotation invariant uniform local quinary patterns (RIU4-LQP), to describe texture patterns in mammograms and to improve the robustness of image features. In conventional processing schemes, image features are obtained by computing histograms from texture patterns. However, such processes ignore very important spatial information related to the texture features. This study designs a new feature vector, namely, K-spectrum, by using Baddeley's K-inhom function to characterise the spatial distribution information of feature point sets. Texture features extracted by RIU4-LQP and K-spectrum are utilised to classify mammograms into BI-RADS density categories. Three feature selection methods are employed to optimise the feature set. In our experiment, two mammogram datasets, INbreast and MIAS, are used to test the proposed methods, and comparative analyses and statistical tests between different schemes are conducted. Experimental results show that our proposed method outperforms other approaches described in the literature, with the best classification accuracy of 92.76% (INbreast) and 86.96% (MIAS).
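To make the quinary coding concrete, the numpy snippet below computes raw local quinary codes with two thresholds over a 4-neighbour layout; the thresholds and neighbourhood are illustrative assumptions, and the paper's RIU4-LQP additionally applies a rotation-invariant uniform encoding that is omitted here.

import numpy as np

def quinary_codes(img: np.ndarray, t1: float = 2.0, t2: float = 8.0) -> np.ndarray:
    # Return (H-2, W-2, 4) codes in {-2,-1,0,1,2} for each pixel's 4 neighbours.
    c = img[1:-1, 1:-1].astype(float)
    neigh = np.stack([img[:-2, 1:-1], img[2:, 1:-1],
                      img[1:-1, :-2], img[1:-1, 2:]], axis=-1).astype(float)
    d = neigh - c[..., None]
    codes = np.zeros_like(d, dtype=int)
    codes[(d >= t1) & (d < t2)] = 1
    codes[d >= t2] = 2
    codes[(d <= -t1) & (d > -t2)] = -1
    codes[d <= -t2] = -2
    return codes

codes = quinary_codes(np.random.randint(0, 255, (64, 64)).astype(float))
print(codes.shape, np.unique(codes))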
Collapse
Affiliation(s)
- Haipeng Li
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand;
- Correspondence:
| | - Ramakrishnan Mukundan
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand;
| | - Shelley Boyd
- Canterbury Breastcare, St. George’s Medical Centre, Christchurch 8014, New Zealand;
| |
Collapse
|
37
|
Tadavarthi Y, Makeeva V, Wagstaff W, Zhan H, Podlasek A, Bhatia N, Heilbrun M, Krupinski E, Safdar N, Banerjee I, Gichoya J, Trivedi H. Overview of Noninterpretive Artificial Intelligence Models for Safety, Quality, Workflow, and Education Applications in Radiology Practice. Radiol Artif Intell 2022; 4:e210114. [PMID: 35391770 DOI: 10.1148/ryai.210114] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 12/17/2021] [Accepted: 01/11/2022] [Indexed: 12/17/2022]
Abstract
Artificial intelligence has become a ubiquitous term in radiology over the past several years, and much attention has been given to applications that aid radiologists in the detection of abnormalities and diagnosis of diseases. However, there are many potential applications related to radiologic image quality, safety, and workflow improvements that present equal, if not greater, value propositions to radiology practices, insurance companies, and hospital systems. This review focuses on six major categories for artificial intelligence applications: study selection and protocoling, image acquisition, worklist prioritization, study reporting, business applications, and resident education. All of these categories can substantially affect different aspects of radiology practices and workflows. Each of these categories has different value propositions in terms of whether they could be used to increase efficiency, improve patient safety, increase revenue, or save costs. Each application is covered in depth in the context of both current and future areas of work. Keywords: Use of AI in Education, Application Domain, Supervised Learning, Safety © RSNA, 2022.
Collapse
Affiliation(s)
- Yasasvi Tadavarthi
- Department of Medicine, Medical College of Georgia, Augusta, Ga (Y.T.); Department of Radiology and Imaging Sciences (V.M., W.W., H.Z., M.H., E.K., N.S., J.G., H.T.), School of Medicine (N.B.), and Department of Biomedical Informatics (I.B.), Emory University, 1364 E Clifton Rd NE, Atlanta, GA 30322; and Southend University Hospital NHS Foundation Trust, Westcliff-on-Sea, UK (A.P.)
| | - Valeria Makeeva
| | - William Wagstaff
| | - Henry Zhan
| | - Anna Podlasek
| | - Neil Bhatia
| | - Marta Heilbrun
| | - Elizabeth Krupinski
| | - Nabile Safdar
| | - Imon Banerjee
| | - Judy Gichoya
| | - Hari Trivedi
| |
|
38
|
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757 PMCID: PMC8859891 DOI: 10.1186/s13058-022-01509-z] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 02/08/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve a better harm-to-benefit ratio, based on earlier detection and better breast cancer outcomes, than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data have led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted, including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor; (b) assessment of a woman's inherent breast cancer risk; and (c) identification of women who are likely to be diagnosed with breast cancer after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment, indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA.,Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
| | - Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA.,Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA.
| |
|
39
|
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. MULTIMEDIA SYSTEMS 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advancements for exploiting this information and using it to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements to the way radiologists and other specialists analyze these images. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities, providing a critical review of recent developments in this direction. We have organized the paper to explain the main traits and concepts of deep learning, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, and detection) that are commonly used for clinical purposes at different anatomical sites, and we also present the key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
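Since this survey repeatedly highlights data augmentation and transfer learning as core DL attributes, a minimal sketch may help non-expert readers see how the two fit together in practice. The model choice, augmentations, and two-class head below are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative augmentations for a small medical-image dataset
# (the specific transforms are assumptions, not the survey's prescription).
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

# Transfer learning: reuse ImageNet features, retrain only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)    # e.g., benign vs. malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```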
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujrat India
| | - Manju Khari
- Jawaharlal Nehru University, New Delhi, India
| |
|
40
|
Tardy M, Mateus D. Leveraging Multi-Task Learning to Cope With Poor and Missing Labels of Mammograms. FRONTIERS IN RADIOLOGY 2022; 1:796078. [PMID: 37492176 PMCID: PMC10365086 DOI: 10.3389/fradi.2021.796078] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 12/06/2021] [Indexed: 07/27/2023]
Abstract
In breast cancer screening, binary classification of mammograms is a common task aiming to determine whether a case is malignant or benign. A Computer-Aided Diagnosis (CADx) system based on a trainable classifier requires clean data and labels coming from a confirmed diagnosis. Unfortunately, such labels are not easy to obtain in clinical practice, since the histopathological reports of biopsy may not be available alongside mammograms, while normal cases may not have an explicit follow-up confirmation. Such ambiguities result either in reducing the number of samples eligible for training or in label uncertainty that may decrease performance. In this work, we maximize the number of samples available for training by relying on multi-task learning. We design a deep-neural-network-based classifier yielding multiple outputs in one forward pass. The predicted classes include binary malignancy, cancer probability estimation, breast density, and image laterality. Since few samples have all classes available and confirmed, we propose to introduce the uncertainty related to the classes as a per-sample weight during training. Such weighting prevents updating the network's parameters when training on uncertain or missing labels. We evaluate our approach on the public INBreast and private datasets, showing statistically significant improvements compared to baseline and independent state-of-the-art approaches. Moreover, we use mammograms from the Susan G. Komen Tissue Bank for fine-tuning, further demonstrating the ability of our multi-task learning setup to improve performance on raw clinical data. We achieved a binary classification performance of AUC = 80.46 on our private dataset and AUC = 85.23 on the INBreast dataset.
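The core idea here, weighting each sample's per-task loss by label certainty so that missing or uncertain labels never drive a parameter update, can be sketched briefly. This is a schematic reading of the abstract, not the authors' implementation; the task names and the use of binary cross-entropy are assumptions:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss(reduction="none")  # keep per-sample losses

def multitask_loss(outputs, labels, weights):
    """outputs, labels, weights: dicts keyed by task (e.g., 'malignancy',
    'density', 'laterality' -- hypothetical names). weights hold per-sample
    certainties in [0, 1]; a weight of 0 masks a sample whose label for that
    task is missing, so it never updates the network for that task."""
    total = torch.tensor(0.0)
    for task in outputs:
        per_sample = bce(outputs[task], labels[task])        # shape (batch,)
        total = total + (weights[task] * per_sample).mean()  # weighted average
    return total
```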
Affiliation(s)
- Mickael Tardy
- Ecole Centrale de Nantes, LS2N, UMR CNRS 6004, Nantes, France
- Hera-MI SAS, Saint-Herblain, France
| | - Diana Mateus
- Ecole Centrale de Nantes, LS2N, UMR CNRS 6004, Nantes, France
| |
|
41
|
Lee SE, Son NH, Kim MH, Kim EK. Mammographic Density Assessment by Artificial Intelligence-Based Computer-Assisted Diagnosis: A Comparison with Automated Volumetric Assessment. J Digit Imaging 2022; 35:173-179. [PMID: 35015180 PMCID: PMC8921363 DOI: 10.1007/s10278-021-00555-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 10/05/2021] [Accepted: 11/21/2021] [Indexed: 10/19/2022] Open
Abstract
We evaluated the mammographic density assessment of an artificial intelligence-based computer-assisted diagnosis (AI-CAD) program by comparing inter-rater agreement with radiologists and with an automated volumetric density assessment program. Between March and May 2020, 488 consecutive mammograms of 488 patients (56.2 ± 10.9 years) were collected from a single institution. We assigned four classes of mammographic density based on BI-RADS (Breast Imaging Reporting and Data System) using commercial AI-CAD (Lunit INSIGHT MMG), and compared inter-rater agreements between radiologists, AI-CAD, and another commercial automated density assessment program (Volpara®). The inter-rater agreement between AI-CAD and the reader consensus was 0.52 with a matched rate of 68.2% (333/488). The inter-rater agreement between Volpara® and the reader consensus was similar to AI-CAD at 0.50 with a matched rate of 62.7% (306/488). The inter-rater agreement between AI-CAD and Volpara® was 0.54 with a matched rate of 61.5% (300/488). In conclusion, density assessments by AI-CAD showed fair agreement with those of radiologists, similar to the agreement between the commercial automated density assessment program and radiologists.
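The inter-rater agreement and matched rate reported here are straightforward to reproduce for any pair of BI-RADS density raters. The sketch below uses hypothetical category labels and scikit-learn's Cohen's kappa; the study does not state whether an unweighted or weighted kappa was used, so that choice is an assumption:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS density categories (0=A, 1=B, 2=C, 3=D) for 8 exams.
ai_cad  = np.array([0, 1, 2, 2, 3, 1, 2, 0])
readers = np.array([0, 1, 2, 3, 3, 1, 1, 0])

kappa = cohen_kappa_score(ai_cad, readers)  # pass weights="linear" for a weighted kappa
matched_rate = (ai_cad == readers).mean()   # proportion of exact category matches
print(f"kappa = {kappa:.2f}, matched rate = {matched_rate:.1%}")
```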
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Korea
| | - Nak-Hoon Son
- Division of Biostatistics, Yongin Severance Hospital, Yonsei University College of Medicine, Gyeonggi-do, Yongin, Republic of Korea
| | - Myung Hyun Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Korea
| | - Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Korea.
| |
|
42
|
Kumar I, Kumar A, Kumar VDA, Kannan R, Vimal V, Singh KU, Mahmud M. Dense Tissue Pattern Characterization Using Deep Neural Network. Cognit Comput 2022. [DOI: 10.1007/s12559-021-09970-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Breast tumors are among the most common diseases affecting women around the world. Classifying the various types of breast tumors contributes to treating them more efficiently. However, this classification task is often hindered by the dense tissue patterns captured in mammograms. The present study proposes a dense tissue pattern characterization framework using a deep neural network. A total of 322 mammograms belonging to the mini-MIAS dataset and 4880 mammograms from the DDSM dataset were taken, and an ROI of fixed size 224 × 224 pixels was extracted from each mammogram. In this work, extensive experimentation was executed using different combinations of training and testing sets and different activation functions with the AlexNet and ResNet-18 models. Data augmentation was used to create additional virtual images for proper training of the DL model. The testing set was then applied to the trained model to validate the proposed approach. During the experiments, four different activation functions, 'sigmoid', 'tanh', 'ReLu', and 'leakyReLu', were used, and the outcome for each function has been reported. The 'ReLu' activation function was found to consistently outperform the others. For each experiment, classification accuracy and the kappa coefficient were computed. The obtained accuracy and kappa value for the MIAS dataset using the ResNet-18 model are 91.3% and 0.803, respectively. For the DDSM dataset, an accuracy of 92.3% and a kappa coefficient of 0.846 are achieved. After combining the images of both datasets, the achieved accuracy is 91.9% and the kappa coefficient is 0.839 using the ResNet-18 model. Finally, it is concluded that the ResNet-18 model and the ReLu activation function yield outstanding performance for this task.
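Two mechanical steps of this pipeline, cropping a fixed 224 × 224 ROI and comparing activation functions inside ResNet-18, can be sketched briefly. How the authors actually substituted activations is not described in the abstract, so the recursive replacement below is an assumption:

```python
import torch.nn as nn
from torchvision import models

def extract_roi(img, cx, cy, size=224):
    """Crop a fixed-size ROI centred at (cx, cy) from a 2-D mammogram array."""
    half = size // 2
    return img[cy - half:cy + half, cx - half:cx + half]

def swap_activation(module, new_act=nn.LeakyReLU):
    """Recursively replace every ReLU so a ResNet-18 can be retrained with a
    different activation (e.g., the paper's 'leakyReLu' variant)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, new_act())
        else:
            swap_activation(child, new_act)

model = models.resnet18(num_classes=2)  # dense vs. non-dense tissue pattern
swap_activation(model)
```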
|
43
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and the accessibility of the large datasets required for training AI algorithms. The imaging modalities that researchers have exploited to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. The article then summarizes the AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, we have focused on reviewing frameworks that report results using mammograms, as it is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammography is the availability of labelled datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and dataset quality generally affects the performance of AI-based algorithms. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
| | - Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| |
|
44
|
Abstract
In screening for breast cancer (BC), mammographic breast density (MBD) is a powerful risk factor that increases breast carcinogenesis and synergistically reduces the sensitivity of mammography. It also reduces specificity of lesion identification, leading to recalls, additional testing, and delayed and later-stage diagnoses, which result in increased health care costs. These findings provide the foundation for dense breast notification laws and lead to the increase in patient and provider interest in MBD. However, unlike other risk factors for BC, MBD is dynamic through a woman’s lifetime and is modifiable. Although MBD is known to change as a result of factors such as reproductive history and hormonal status, few conclusions have been reached for lifestyle factors such as alcohol, diet, physical activity, smoking, body mass index (BMI), and some commonly used medications. Our review examines the emerging evidence for the association of modifiable factors on MBD and the influence of MBD on BC risk. There are clear associations between alcohol use and menopausal hormone therapy and increased MBD. Physical activity and the Mediterranean diet lower the risk of BC without significant effect on MBD. Although high BMI and smoking are known risk factors for BC, they have been found to decrease MBD. The influence of several other factors, including caffeine intake, nonhormonal medications, and vitamins, on MBD is unclear. We recommend counseling patients on these modifiable risk factors and using this knowledge to help with informed decision making for tailored BC prevention strategies.
Affiliation(s)
- Sara P Lester
- Corresponding author: Sara P. Lester, MD, Division of General Internal Medicine, Mayo Clinic, 200 First St SW, Rochester, MN 55905, USA.
| | - Aparna S Kaur
- Division of General Internal Medicine, Mayo Clinic, Rochester, MN, USA
| | - Suneela Vegunta
- Division of Women’s Health Internal Medicine, Mayo Clinic, Scottsdale, AZ, USA
| |
|
45
|
Xu C, Qi Y, Wang Y, Lou M, Pi J, Ma Y. ARF-Net: An Adaptive Receptive Field Network for breast mass segmentation in whole mammograms and ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103178] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
46
|
Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app12010148] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
We propose and evaluate a procedure for the explainability of a deep learning based breast density classifier. A total of 1662 mammography exams labeled according to the BI-RADS breast density categories were used. We built a residual Convolutional Neural Network, trained it, and studied the responses of the model to input changes, such as different distributions of class labels in the training and test sets and suitable image pre-processing. The aim was to identify the steps of the analysis with a relevant impact on classifier performance and on model explainability. We used the grad-CAM algorithm to produce saliency maps for the CNN and computed the Spearman rank correlation between input images and saliency maps as a measure of explanation accuracy. We found that pre-processing is critical not only for the accuracy, precision, and recall of a model but also for obtaining a reasonable explanation of the model itself. Our CNN reaches good performance compared to the state of the art, and it relies on the dense pattern to make the classification. Saliency maps strongly correlate with the dense pattern. This work is a starting point towards the implementation of a standard framework to evaluate both CNN performance and the explainability of CNN predictions in medical image classification problems.
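The explanation-accuracy measure described here, the Spearman rank correlation between an input image and its grad-CAM saliency map, is simple to compute once both arrays are available. A minimal sketch with random stand-in data (in practice the saliency map would come from a grad-CAM implementation):

```python
import numpy as np
from scipy.stats import spearmanr

def explanation_accuracy(image, saliency):
    """Spearman rank correlation between input image and saliency map,
    computed over flattened pixels; flattening is an implementation choice."""
    rho, _ = spearmanr(image.ravel(), saliency.ravel())
    return rho

# Random data standing in for a mammogram and its grad-CAM map.
img = np.random.rand(128, 128)
sal = img + 0.1 * np.random.rand(128, 128)  # saliency loosely tracking intensity
print(f"Spearman rho = {explanation_accuracy(img, sal):.2f}")
```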
|
47
|
Zhou Q, Zuley M, Guo Y, Yang L, Nair B, Vargo A, Ghannam S, Arefan D, Wu S. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat Commun 2021; 12:7281. [PMID: 34907229 PMCID: PMC8671500 DOI: 10.1038/s41467-021-27577-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2020] [Accepted: 11/26/2021] [Indexed: 11/08/2022] Open
Abstract
While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, yet little research has been done. We perform a study to investigate the behaviors of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate the effects on human experts when visually identifying potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that are initially correctly classified by the AI-CAD model. Five breast imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continued research on the safety issues of medical AI models and for developing potential defensive solutions against adversarial attacks.
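The paper generates adversarial mammograms with a GAN; as a rough illustration of what an adversarial perturbation does, the sketch below uses the much simpler fast gradient sign method (FGSM) instead, a deliberate substitution rather than the authors' method:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.01):
    """Perturb an image a small step in the gradient direction that increases
    the loss, so a CAD classifier may flip its prediction while the change
    stays visually subtle (eps is an illustrative magnitude)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()  # loss-increasing perturbation
    return adv.clamp(0, 1).detach()
```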
Affiliation(s)
- Qianwei Zhou
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou, 310023, China
| | - Margarita Zuley
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Magee-Womens Hospital, University of Pittsburgh Medical Center, Pittsburgh, PA, 15213, USA
| | - Yuan Guo
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, 510180, China
| | - Lu Yang
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Chongqing Key Laboratory of Translational Research for Cancer Metastasis and Individualized Treatment, Chongqing University Cancer Hospital, Chongqing, 400030, China
| | - Bronwyn Nair
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Magee-Womens Hospital, University of Pittsburgh Medical Center, Pittsburgh, PA, 15213, USA
| | - Adrienne Vargo
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Magee-Womens Hospital, University of Pittsburgh Medical Center, Pittsburgh, PA, 15213, USA
| | - Suzanne Ghannam
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Magee-Womens Hospital, University of Pittsburgh Medical Center, Pittsburgh, PA, 15213, USA
| | - Dooman Arefan
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Shandong Wu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
| |
|
48
|
Li H, Mukundan R, Boyd S. Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J Imaging 2021; 7:jimaging7100205. [PMID: 34677291 PMCID: PMC8540831 DOI: 10.3390/jimaging7100205] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 09/26/2021] [Accepted: 10/01/2021] [Indexed: 11/16/2022] Open
Abstract
This paper investigates the usefulness of multi-fractal analysis and local binary patterns (LBP) as texture descriptors for classifying mammogram images into different breast density categories. Multi-fractal analysis is also used in the pre-processing step to segment the region of interest (ROI). We use four multi-fractal measures and the LBP method to extract texture features, and compare their classification performance in experiments. In addition, a feature descriptor combining multi-fractal features and multi-resolution LBP (MLBP) features is proposed and evaluated in this study to improve classification accuracy. An autoencoder network and principal component analysis (PCA) are used for reducing feature redundancy in the classification model. A full field digital mammogram (FFDM) dataset, INBreast, which contains 409 mammogram images, is used in our experiment. BI-RADS density labels given by radiologists are used as the ground truth to evaluate the classification results using the proposed methods. Experimental results show that the proposed feature descriptor based on multi-fractal features and LBP results in higher classification accuracy than using individual texture feature sets.
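The multi-resolution LBP (MLBP) part of the proposed descriptor can be sketched with scikit-image: compute uniform LBP codes at several radii and concatenate the normalized histograms. The radii and point counts below are conventional illustrative choices, not the paper's exact settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def mlbp_histogram(roi, radii=(1, 2, 3), points_per_radius=8):
    """Concatenate uniform-LBP histograms computed at several radii to form
    a multi-resolution texture feature vector for a mammogram ROI."""
    feats = []
    for r in radii:
        p = points_per_radius * r
        codes = local_binary_pattern(roi, p, r, method="uniform")
        n_bins = p + 2  # uniform LBP codes span 0..p+1
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```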
Affiliation(s)
- Haipeng Li
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
| | - Ramakrishnan Mukundan
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
| | - Shelley Boyd
- Canterbury Breastcare, St. George’s Medical Centre, Christchurch 8014, New Zealand
| |
|
49
|
Identification of the Vas Deferens in Laparoscopic Inguinal Hernia Repair Surgery Using the Convolutional Neural Network. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5578089. [PMID: 34603649 PMCID: PMC8481069 DOI: 10.1155/2021/5578089] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 08/25/2021] [Accepted: 09/07/2021] [Indexed: 01/10/2023]
Abstract
Inguinal hernia repair is one of the most frequently conducted surgical procedures worldwide. Laparoscopic inguinal hernia repair is considered to be technically challenging. Artificial intelligence technology has made significant progress in medical imaging, but its application in laparoscopic surgery has not been widely carried out. Our aim is to detect vas deferens images in laparoscopic inguinal hernia repair using the convolutional neural network (CNN) and help surgeons to identify the vas deferens in time. We collected surgery videos from 35 patients with inguinal hernia who underwent laparoscopic hernia repair. We classified and labeled the images of the vas deferens and used the CNN to learn the image features. In total, 2,600 images (26 patients) were labeled for training and validating the neural network and 1,200 images (6 patients) and 6 short video clips (3 patients) for testing. We adjusted the model parameters and tested the performance of the model under different confidence levels and IoU thresholds, and used the chi-square test to analyze the statistical difference in the video test dataset. We evaluated the model performance by calculating the true positive rate (TPR), true negative rate (TNR), accuracy (ACC), positive predictive value (PPV), and F1-score at different confidence levels of 0.1 to 0.9. At a confidence level of 0.4, the results were TPR 90.61%, TNR 98.67%, PPV 98.57%, ACC 94.61%, and F1 94.42%, respectively. The average precision (AP) was 92.38% at IoU 0.3. In the video test dataset, the average values of TPR and TNR were 90.11% and 95.76%, respectively, and there was no significant difference among the patients. The results suggest that the CNN can quickly and accurately identify and label vas deferens images in laparoscopic inguinal hernia repair.
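The per-threshold metrics reported in this study follow directly from confusion-matrix counts; the helper below reproduces the arithmetic (e.g., the reported TPR 90.61% and PPV 98.57% combine to the reported F1 of 94.42%):

```python
def detection_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics of the kind used to evaluate the detector."""
    tpr = tp / (tp + fn)                    # sensitivity / recall
    tnr = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                    # precision
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall accuracy
    f1 = 2 * ppv * tpr / (ppv + tpr)        # harmonic mean of PPV and TPR
    return tpr, tnr, ppv, acc, f1
```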
|
50
|
Maghsoudi OH, Gastounioti A, Scott C, Pantalone L, Wu FF, Cohen EA, Winham S, Conant EF, Vachon C, Kontos D. Deep-LIBRA: An artificial-intelligence method for robust quantification of breast density with independent validation in breast cancer risk assessment. Med Image Anal 2021; 73:102138. [PMID: 34274690 PMCID: PMC8453099 DOI: 10.1016/j.media.2021.102138] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 04/29/2021] [Accepted: 06/16/2021] [Indexed: 02/06/2023]
Abstract
Breast density is an important risk factor for breast cancer that also affects the specificity and sensitivity of screening mammography. Current federal legislation mandates reporting of breast density for all women undergoing breast cancer screening. Clinically, breast density is assessed visually using the American College of Radiology Breast Imaging Reporting And Data System (BI-RADS) scale. Here, we introduce an artificial intelligence (AI) method to estimate breast density from digital mammograms. Our method leverages deep learning using two convolutional neural network architectures to accurately segment the breast area. An AI algorithm combining superpixel generation and radiomic machine learning is then applied to differentiate dense from non-dense tissue regions within the breast, from which breast density is estimated. Our method was trained and validated on a multi-racial, multi-institutional dataset of 15,661 images (4,437 women), and then tested on an independent matched case-control dataset of 6368 digital mammograms (414 cases; 1178 controls) for both breast density estimation and case-control discrimination. On the independent dataset, breast percent density (PD) estimates from Deep-LIBRA and an expert reader were strongly correlated (Spearman correlation coefficient = 0.90). Moreover, in a model adjusted for age and BMI, Deep-LIBRA yielded a higher case-control discrimination performance (area under the ROC curve, AUC = 0.612 [95% confidence interval (CI): 0.584, 0.640]) compared to four other widely-used research and commercial breast density assessment methods (AUCs = 0.528 to 0.599). Our results suggest a strong agreement of breast density estimates between Deep-LIBRA and gold-standard assessment by an expert reader, as well as improved performance in breast cancer risk assessment over state-of-the-art open-source and commercial methods.
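The quantity Deep-LIBRA estimates, breast percent density, reduces to a ratio of segmented areas, and the reported reader agreement is a Spearman correlation over exams. A minimal sketch with hypothetical values:

```python
import numpy as np
from scipy.stats import spearmanr

def percent_density(breast_mask, dense_mask):
    """PD = dense-tissue area / total breast area, from boolean segmentation
    masks such as those produced by the pipeline's segmentation steps."""
    return 100.0 * dense_mask.sum() / breast_mask.sum()

# Hypothetical PD estimates for four exams, AI vs. expert reader;
# the paper reports a Spearman coefficient of 0.90 on its independent test set.
ai_pd     = np.array([12.5, 33.0, 8.2, 47.1])
reader_pd = np.array([14.0, 30.5, 9.0, 50.3])
rho, _ = spearmanr(ai_pd, reader_pd)
print(f"Spearman rho = {rho:.2f}")
```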
Affiliation(s)
- Omid Haji Maghsoudi
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| | - Aimilia Gastounioti
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| | - Christopher Scott
- Department of Health Sciences Research, Mayo Clinic, Rochester, 55905, MN, USA
| | - Lauren Pantalone
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| | - Fang-Fang Wu
- Department of Health Sciences Research, Mayo Clinic, Rochester, 55905, MN, USA
| | - Eric A. Cohen
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| | - Stacey Winham
- Department of Health Sciences Research, Mayo Clinic, Rochester, 55905, MN, USA
| | - Emily F. Conant
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| | - Celine Vachon
- Department of Health Sciences Research, Mayo Clinic, Rochester, 55905, MN, USA
| | - Despina Kontos
- Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
| |
|