1
Thomas J, Malla L, Shibwabo B. Advances in analytical approaches for background parenchymal enhancement in predicting breast tumor response to neoadjuvant chemotherapy: A systematic review. PLoS One 2025; 20:e0317240. [PMID: 40053513 PMCID: PMC11888135 DOI: 10.1371/journal.pone.0317240]
Abstract
BACKGROUND Breast cancer (BC) continues to pose a substantial global health concern, necessitating continuous advancements in therapeutic approaches. Neoadjuvant chemotherapy (NAC) has gained prominence as a key therapeutic strategy, and there is growing interest in the predictive utility of background parenchymal enhancement (BPE) in evaluating the response of breast tumors to NAC. However, the analysis of BPE as a predictive biomarker, along with the techniques used to model BPE changes for accurate and timely prediction of treatment response, presents several obstacles. This systematic review investigates recent advancements in analytical methodologies for BPE analysis and evaluates their reliability and effectiveness in predicting breast tumor response to NAC, ultimately contributing to the development of personalized and effective therapeutic strategies. METHODS A comprehensive and structured literature search was conducted across key electronic databases, including the Cochrane Database of Systematic Reviews, Google Scholar, PubMed, and IEEE Xplore, covering articles published up to May 10, 2024. The inclusion criteria targeted studies of breast cancer cohorts treated with NAC that involved both a pre-treatment and at least one post-treatment breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) scan and analyzed the utility of BPE in predicting breast tumor response to NAC. Methodological quality assessment and data extraction were performed to synthesize findings and identify commonalities and differences among the various BPE analytical approaches. RESULTS The search yielded a total of 882 records. After screening, 78 eligible records were identified, and 13 studies ultimately met the inclusion criteria for the systematic review. Analysis of the literature revealed a marked evolution in BPE analysis, from early studies focusing on single time-point BPE to more recent studies adopting longitudinal BPE analysis. The review uncovered several gaps that compromise the accuracy and timeliness of existing longitudinal BPE analysis methods, including missing data across multiple imaging time points, manual segmentation of the whole-breast region of interest, and over-reliance on traditional statistical methods such as logistic regression for modeling BPE and pathological complete response (pCR). CONCLUSION This review provides a thorough examination of current advancements in analytical approaches for BPE analysis in predicting breast tumor response to NAC. The shift towards longitudinal BPE analysis has highlighted significant gaps, suggesting the need for alternative analytical techniques, particularly from the field of artificial intelligence (AI). Future longitudinal BPE research should focus on standardizing BPE measurement and analysis through deep learning-based approaches for automated tumor segmentation and advanced AI techniques that can better accommodate varied breast tumor responses, non-linear relationships, and complex temporal dynamics in BPE datasets, while also handling missing data more effectively. Such integration could lead to more precise and timely predictions of breast tumor response to NAC, thereby enhancing personalized and effective breast cancer treatment strategies.
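The review notes that existing longitudinal studies typically relate BPE change to pathological complete response with logistic regression. As a rough illustration only (the predictors, values, and outcomes below are hypothetical, not taken from any reviewed study), such a model might be fitted as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-patient predictors: baseline BPE fraction and relative BPE change
# after the first NAC cycle; outcome is pCR (1) versus non-pCR (0).
X = np.array([[0.42, -0.30], [0.55, -0.05], [0.38, -0.45], [0.61, -0.10],
              [0.47, -0.38], [0.52, -0.02], [0.35, -0.50], [0.58, -0.08]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)        # direction and strength of each BPE predictor
print(model.predict_proba(X[:2])[:, 1])     # predicted pCR probabilities for two patients
```

Such a model assumes a linear, monotonic relationship between BPE change and the log-odds of pCR, which is one of the limitations the review raises against purely statistical approaches.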
Affiliation(s)
- Julius Thomas
- School of Computing and Engineering Sciences, Strathmore University, Nairobi, Kenya
- Lucas Malla
- London School of Hygiene & Tropical Medicine, London, United Kingdom
- Benard Shibwabo
- School of Computing and Engineering Sciences, Strathmore University, Nairobi, Kenya
2
Duong KS, Rubner R, Siegel A, Adam R, Ha R, Maldjian T. Machine Learning Assessment of Background Parenchymal Enhancement in Breast Cancer and Clinical Applications: A Literature Review. Cancers (Basel) 2024; 16:3681. [PMID: 39518120 PMCID: PMC11545443 DOI: 10.3390/cancers16213681]
Abstract
Background parenchymal enhancement (BPE) on breast MRI holds promise as an imaging biomarker for breast cancer risk and prognosis. The ability to identify those at greatest risk can inform clinical decisions, promoting early diagnosis and potentially guiding prevention strategies such as risk-reduction interventions with selective estrogen receptor modulators and aromatase inhibitors. Currently, the standard method of assessing BPE is based on the Breast Imaging-Reporting and Data System (BI-RADS), in which a radiologist qualitatively categorizes BPE as minimal, mild, moderate, or marked on contrast-enhanced MRI. This approach is subjective, prone to inter- and intra-observer variability, and compromises accuracy and reproducibility; it also limits assessment to four categories. More recently developed methods using machine learning/artificial intelligence (ML/AI) techniques have the potential to quantify BPE more accurately and objectively. This paper reviews current ML/AI methods for determining BPE and the clinical applications of BPE as an imaging biomarker for breast cancer risk prediction and prognosis.
Affiliation(s)
- Katie S. Duong
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY 10467, USA; (K.S.D.); (R.R.); (A.S.); (R.H.)
- Rhianna Rubner
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY 10467, USA
- Adam Siegel
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY 10467, USA
- Richard Adam
- New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Richard Ha
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY 10467, USA
- Takouhie Maldjian
- Department of Radiology, Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, NY 10467, USA
3
Duan W, Wu Z, Zhu H, Zhu Z, Liu X, Shu Y, Zhu X, Wu J, Peng D. Deep learning modeling using mammography images for predicting estrogen receptor status in breast cancer. Am J Transl Res 2024; 16:2411-2422. [PMID: 39006260 PMCID: PMC11236640 DOI: 10.62347/puhr6185]
Abstract
BACKGROUND The estrogen receptor (ER) serves as a pivotal indicator for assessing endocrine therapy efficacy and breast cancer prognosis. Invasive biopsy is the conventional approach for appraising ER expression levels, but it is disadvantaged by tumor heterogeneity. To address this issue, a deep learning model leveraging mammography images was developed in this study for accurate evaluation of ER status in patients with breast cancer. OBJECTIVES To predict ER status in breast cancer patients with a newly developed deep learning model leveraging mammography images. MATERIALS AND METHODS Datasets comprising preoperative mammography images, ER expression levels, and clinical data spanning October 2016 to October 2021 were retrospectively collected from 358 patients diagnosed with invasive ductal carcinoma. These datasets were divided into a training dataset (n = 257) and a testing dataset (n = 101). A deep learning prediction model, referred to as the IP-SE-DResNet model, was then developed using two deep residual networks together with the Squeeze-and-Excitation attention mechanism. The model was designed to predict ER status from mammography images in both the craniocaudal and mediolateral oblique views. Performance measurements including prediction accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were employed to assess the effectiveness of the model. RESULTS In the training dataset, the AUCs for the IP-SE-DResNet model using mammography images from the craniocaudal view, the mediolateral oblique view, and the combined images from both views were 0.849 (95% CI: 0.809-0.868), 0.858 (95% CI: 0.813-0.872), and 0.895 (95% CI: 0.866-0.913), respectively. Correspondingly, the AUCs for these three image categories in the testing dataset were 0.835 (95% CI: 0.790-0.887), 0.746 (95% CI: 0.793-0.889), and 0.886 (95% CI: 0.809-0.934), respectively. A comprehensive comparison of performance measurements showed a substantial improvement of the proposed IP-SE-DResNet model over a traditional radiomics model employing a naive Bayes classifier, whose AUCs were only 0.614 (95% CI: 0.594-0.638) in the training dataset and 0.613 (95% CI: 0.587-0.654) in the testing dataset, both obtained using the combination of craniocaudal and mediolateral oblique mammography images. CONCLUSIONS The proposed IP-SE-DResNet model presents a potent and non-invasive approach for predicting ER status in breast cancer patients, potentially enhancing the efficiency and diagnostic precision of radiologists.
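The paper's IP-SE-DResNet architecture is not reproduced here; as a minimal sketch of the standard Squeeze-and-Excitation attention mechanism it builds on (the generic formulation, not the authors' exact configuration), a PyTorch block could look like this:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention for 2D feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature maps channel-wise
```

In an SE-augmented residual network, a block like this is typically inserted after the convolutional layers of each residual unit, before the skip connection is added.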
Affiliation(s)
- Wenfeng Duan
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Zhiheng Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Huijun Zhu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Zhiyun Zhu
- Department of Cardiology, Jiangxi Provincial People’s Hospital, Nanchang, Jiangxi, China
- Xiang Liu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yongqiang Shu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Xishun Zhu
- School of Advanced Manufacturing, Nanchang University, Nanchang, Jiangxi, China
- Jianhua Wu
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Dechang Peng
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
4
Ripaud E, Jailin C, Quintana GI, Milioni de Carvalho P, Sanchez de la Rosa R, Vancamberg L. Deep-learning model for background parenchymal enhancement classification in contrast-enhanced mammography. Phys Med Biol 2024; 69:115013. [PMID: 38657641 DOI: 10.1088/1361-6560/ad42ff]
Abstract
Background. Breast background parenchymal enhancement (BPE) is correlated with the risk of breast cancer. BPE level is currently assessed by radiologists in contrast-enhanced mammography (CEM) using 4 classes: minimal, mild, moderate, and marked, as described in the Breast Imaging Reporting and Data System (BI-RADS). However, BPE classification remains subject to intra- and inter-reader variability. Fully automated methods to assess BPE level have already been developed in breast contrast-enhanced MRI (CE-MRI) and have been shown to provide accurate and repeatable BPE level classification. However, to our knowledge, no BPE level classification tool for CEM is available in the literature. Materials and methods. A BPE level classification tool based on deep learning was trained and optimized on 7012 CEM image pairs (low-energy and recombined images) and evaluated on a dataset of 1013 image pairs. The impact of image resolution, backbone architecture, and loss function was analyzed, as well as the influence of lesion presence and type on BPE assessment. Model performance was evaluated using several metrics, including 4-class balanced accuracy and mean absolute error. The results of the optimized model for a binary classification, minimal/mild versus moderate/marked, were also investigated. Results. The optimized model achieved a 4-class balanced accuracy of 71.5% (95% CI: 71.2-71.9), with 98.8% of classification errors occurring between adjacent classes. For binary classification, the accuracy reached 93.0%. A slight decrease in model accuracy was observed in the presence of lesions, but it was not statistically significant, suggesting that the model is robust to the presence of lesions in the image for this classification task. Visual assessment also confirmed that the model is more affected by non-mass enhancements than by mass-like enhancements. Conclusion. The proposed BPE classification tool for CEM achieves results similar to those published in the literature for CE-MRI.
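For readers unfamiliar with the reported metrics, the following sketch (with made-up labels, not the study's data) shows how the 4-class balanced accuracy, mean absolute error, share of errors falling in adjacent BPE classes, and the minimal/mild versus moderate/marked binary accuracy can be computed:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

# Hypothetical BPE labels: 0 = minimal, 1 = mild, 2 = moderate, 3 = marked
y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3])
y_pred = np.array([0, 2, 2, 3, 1, 1, 0, 2])

balanced_acc = balanced_accuracy_score(y_true, y_pred)   # mean of per-class recalls
mae = np.abs(y_pred - y_true).mean()                      # mean absolute class error

errors = y_pred != y_true
adjacent_fraction = (np.abs(y_pred - y_true)[errors] == 1).mean()  # errors into a neighbouring class

# Binary regrouping: minimal/mild (0) versus moderate/marked (1)
binary_acc = ((y_true >= 2) == (y_pred >= 2)).mean()
```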
5
Lew CO, Harouni M, Kirksey ER, Kang EJ, Dong H, Gu H, Grimm LJ, Walsh R, Lowell DA, Mazurowski MA. A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI. Sci Rep 2024; 14:5383. [PMID: 38443410 PMCID: PMC10915139 DOI: 10.1038/s41598-024-54048-2]
Abstract
Increased breast density, that is, a greater amount of fibroglandular tissue (FGT) relative to the overall breast volume, is associated with an increased risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objectives were to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and that derived from the manually generated masks was 0.95, and the correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models have been made publicly available.
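A minimal illustration of the two quantities reported above, the Dice similarity coefficient between predicted and manual masks and breast density as the FGT-to-breast volume ratio, might look like this (assuming binary NumPy masks; this is not the authors' released code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def breast_density(fgt_mask: np.ndarray, breast_mask: np.ndarray) -> float:
    """Breast density as the fraction of breast voxels labelled as fibroglandular tissue."""
    return fgt_mask.astype(bool).sum() / breast_mask.astype(bool).sum()
```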
Affiliation(s)
- Christopher O Lew
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA.
- Majid Harouni
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ella R Kirksey
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Elianne J Kang
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Haoyu Dong
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Hanxue Gu
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Lars J Grimm
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ruth Walsh
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Dorothy A Lowell
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Maciej A Mazurowski
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
6
Watt GP, Thakran S, Sung JS, Jochelson MS, Lobbes MBI, Weinstein SP, Bradbury AR, Buys SS, Morris EA, Apte A, Patel P, Woods M, Liang X, Pike MC, Kontos D, Bernstein JL. Association of Breast Cancer Odds with Background Parenchymal Enhancement Quantified Using a Fully Automated Method at MRI: The IMAGINE Study. Radiology 2023; 308:e230367. [PMID: 37750771 PMCID: PMC10546291 DOI: 10.1148/radiol.230367]
Abstract
Background Background parenchymal enhancement (BPE) at breast MRI has been associated with increased breast cancer risk in several independent studies; however, the variability of subjective BPE assessment has precluded its use in clinical practice. Purpose To examine the association between fully objective measures of BPE at MRI and the odds of breast cancer. Materials and Methods This prospective case-control study included patients who underwent a bilateral breast MRI examination and were receiving care at one of three centers in the United States from November 2010 to July 2017. Breast volume, fibroglandular tissue (FGT) volume, and BPE were quantified using fully automated software. Fat volume was defined as breast volume minus FGT volume. BPE extent was defined as the proportion of FGT voxels with enhancement of 20% or more. The Spearman rank correlation between quantitative BPE extent and Breast Imaging Reporting and Data System (BI-RADS) BPE categories assigned by an experienced board-certified breast radiologist was estimated. With use of multivariable logistic regression, breast cancer case-control status was regressed on tertiles (low, moderate, and high) of BPE, FGT volume, and fat volume, with adjustment for covariates. Results In total, 536 case participants with breast cancer (median age, 48 years [IQR, 43-55 years]) and 940 cancer-free controls (median age, 46 years [IQR, 38-55 years]) were included. BPE extent was positively associated with BI-RADS BPE (rs = 0.54; P < .001). Compared with low BPE extent (range, 2.9%-34.2%), high BPE extent (range, 50.7%-97.3%) was associated with increased odds of breast cancer (odds ratio [OR], 1.74 [95% CI: 1.23, 2.46]; P for trend = .002) in a multivariable model also including FGT volume (OR, 1.39 [95% CI: 0.97, 1.98]) and fat volume (OR, 1.46 [95% CI: 1.04, 2.06]). The association of high BPE extent with increased odds of breast cancer was similar for premenopausal and postmenopausal women (ORs, 1.75 and 1.83, respectively; interaction P = .73). Conclusion Objectively measured BPE at breast MRI is associated with increased breast cancer odds for both premenopausal and postmenopausal women. Clinical trial registration no. NCT02301767.
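The study defines BPE extent as the proportion of FGT voxels enhancing by 20% or more. A schematic version of that calculation for pre- and post-contrast volumes (the exact intensity normalization used by the automated software may differ) is:

```python
import numpy as np

def bpe_extent(pre: np.ndarray, post: np.ndarray, fgt_mask: np.ndarray,
               threshold: float = 0.20) -> float:
    """Fraction of FGT voxels whose signal rises by >= `threshold` after contrast."""
    fgt = fgt_mask.astype(bool)
    # relative enhancement per FGT voxel: (post - pre) / pre
    enhancement = (post[fgt] - pre[fgt]) / np.maximum(pre[fgt], 1e-6)
    return float((enhancement >= threshold).mean())
```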
Affiliation(s)
- Gordon P. Watt
- Snekha Thakran
- Janice S. Sung
- Maxine S. Jochelson
- Marc B. I. Lobbes
- Susan P. Weinstein
- Angela R. Bradbury
- Saundra S. Buys
- Elizabeth A. Morris
- Aditya Apte
- Prusha Patel
- Meghan Woods
- Xiaolin Liang
- Malcolm C. Pike
- Despina Kontos
- Jonine L. Bernstein
- From the Department of Epidemiology and Biostatistics (G.P.W., P.P., M.W., X.L., M.C.P., J.L.B.), Department of Radiology (J.S.S., M.S.J.), and Department of Medical Physics (A.A.), Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065; Department of Radiology, Perelman Center for Advanced Medicine at the University of Pennsylvania, Philadelphia, Pa (S.T., S.P.W., A.R.B., D.K.); Department of Medical Imaging, Zuyderland Medical Center, Sittard-Geleen, the Netherlands (M.B.I.L.); Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, the Netherlands (M.B.I.L.); GROW School for Oncology and Reproduction, Maastricht University, Maastricht, the Netherlands (M.B.I.L.); Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah (S.S.B.); and Department of Radiology, University of California Davis Medical Center, Davis, Calif (E.A.M.)
7
Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728 PMCID: PMC10468506 DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) on multi-institutional MRI data and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal test set (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared with convolution-based models such as nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
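Alongside the Dice score, the average symmetric surface distance (ASSD) is reported. A simple reference implementation for 3D binary masks (illustrative only, not the evaluation code used in the study) is:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Voxels on the boundary of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~ndimage.binary_erosion(mask))

def assd(mask_a: np.ndarray, mask_b: np.ndarray,
         spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance between two binary segmentations."""
    pa = surface_points(mask_a) * np.asarray(spacing)   # scale voxel indices to mm
    pb = surface_points(mask_b) * np.asarray(spacing)
    d_ab = cKDTree(pb).query(pa)[0]   # distance from each A-surface point to B's surface
    d_ba = cKDTree(pa).query(pb)[0]   # and vice versa
    return float(np.concatenate([d_ab, d_ba]).mean())
```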
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Fritz Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Luisa Huck
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Vanessa Raaff
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Eva Kemmer
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Teresa Lemainque
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
8
Anaby D, Shavin D, Zimmerman-Moreno G, Nissan N, Friedman E, Sklair-Levy M. 'Earlier than Early' Detection of Breast Cancer in Israeli BRCA Mutation Carriers Applying AI-Based Analysis to Consecutive MRI Scans. Cancers (Basel) 2023; 15:3120. [PMID: 37370730 DOI: 10.3390/cancers15123120]
Abstract
Female carriers of BRCA1/BRCA2 (BRCA) pathogenic variants (PVs) are at a substantially higher risk of developing breast cancer (BC) than the average-risk population. Detection of BC at an early stage significantly improves prognosis. To facilitate early BC detection, a surveillance scheme that includes annual MRI-based breast imaging is offered to BRCA PV carriers from age 25-30 years. Indeed, adherence to the recommended scheme has been shown to be associated with earlier disease stage at BC diagnosis, more in situ pathology, smaller tumors, and less axillary involvement. While MRI is the most sensitive modality for BC detection in BRCA PV carriers, a significant number of radiological lesions (mostly enhancing foci) are overlooked or misinterpreted, leading to delayed BC diagnosis at a more advanced stage. In this study, we developed an artificial intelligence (AI) network aimed at more accurate classification of enhancing foci in MRIs of BRCA PV carriers, thereby reducing false-negative interpretations. Foci retrospectively identified in prior MRIs that were either diagnosed as BC or found to be benign/normal on a subsequent MRI were manually segmented and served as input to a convolutional network architecture. The model successfully classified 65% of the cancerous foci, most of them triple-negative BC. If validated, applying this scheme routinely may facilitate 'earlier than early' BC diagnosis in BRCA PV carriers.
Affiliation(s)
- Debbie Anaby
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan 52621, Israel
- Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 6910201, Israel
- David Shavin
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan 52621, Israel
- Noam Nissan
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan 52621, Israel
- Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 6910201, Israel
- Eitan Friedman
- Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 6910201, Israel
- Meirav High Risk Center, Sheba Medical Center, Ramat Gan 52621, Israel
- Miri Sklair-Levy
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat Gan 52621, Israel
- Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 6910201, Israel
- Meirav High Risk Center, Sheba Medical Center, Ramat Gan 52621, Israel
9
Cè M, Caloro E, Pellegrino ME, Basile M, Sorce A, Fazzini D, Oliva G, Cellina M. Artificial intelligence in breast cancer imaging: risk stratification, lesion detection and classification, treatment planning and prognosis-a narrative review. Exploration of Targeted Anti-tumor Therapy 2022; 3:795-816. [PMID: 36654817 PMCID: PMC9834285 DOI: 10.37349/etat.2022.00113]
Abstract
The advent of artificial intelligence (AI) represents a real game changer in today's landscape of breast cancer imaging. Several innovative AI-based tools that promise to accelerate progress toward truly patient-tailored management have been developed and validated in recent years. Numerous studies confirm that proper integration of AI into existing clinical workflows could bring significant benefits to women, radiologists, and healthcare systems. The AI-based approach has proved particularly useful for developing new risk prediction models that integrate multiple data streams for planning individualized screening protocols. Furthermore, AI models could help radiologists in the pre-screening and lesion detection phases, increasing diagnostic accuracy while reducing workload and complications related to overdiagnosis. Radiomics and radiogenomics approaches could extract the so-called imaging signature of the tumor to plan a targeted treatment. The main challenges to the development of AI tools are the huge amounts of high-quality data required to train and validate these models and the need for a multidisciplinary team with solid machine-learning skills. The purpose of this article is to present a summary of the most important AI applications in breast cancer imaging, analyzing possible challenges and new perspectives related to the widespread adoption of these new tools.
Affiliation(s)
- Maurizio Cè
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Elena Caloro
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Maria E. Pellegrino
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Mariachiara Basile
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Adriana Sorce
- Postgraduate School in Diagnostic and Interventional Radiology, University of Milan, 20122 Milan, Italy
- Giancarlo Oliva
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy
- Michaela Cellina
- Department of Radiology, ASST Fatebenefratelli Sacco, 20121 Milan, Italy
10
Zhang X, Liu M, Ren W, Sun J, Wang K, Xi X, Zhang G. Predicting of axillary lymph node metastasis in invasive breast cancer using multiparametric MRI dataset based on CNN model. Front Oncol 2022; 12:1069733. [PMID: 36561533 PMCID: PMC9763602 DOI: 10.3389/fonc.2022.1069733]
Abstract
Purpose To develop a multiparametric MRI model for predicting axillary lymph node metastasis in invasive breast cancer. Methods Clinical data and T2WI, DWI, and DCE-MRI images of 252 patients with invasive breast cancer were retrospectively analyzed and divided into axillary lymph node metastasis (ALNM) and non-ALNM groups, using biopsy results as the reference standard. The regions of interest (ROIs) in the T2WI, DWI, and DCE-MRI images were segmented using MATLAB software, resized to a uniform 224 × 224, and normalized before being used as input to the T2WI, DWI, and DCE-MRI models, all of which were based on ResNet-50 networks. Following the idea of weighted voting in ensemble learning, the T2WI, DWI, and DCE-MRI models were then used as base models to construct a multiparametric MRI model. The entire dataset was randomly divided into a training set (202 cases: 78 ALNM, 124 non-ALNM) and a testing set (50 cases: 20 ALNM, 30 non-ALNM). Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were calculated for each model. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the diagnostic performance of each model for axillary lymph node metastasis, and the DeLong test was performed, with P < 0.05 considered statistically significant. Results For the assessment of axillary lymph node status in invasive breast cancer on the test set, the multiparametric MRI model yielded an AUC of 0.913 (95% CI, 0.799-0.974); the T2WI-based model yielded an AUC of 0.908 (95% CI, 0.792-0.971); the DWI-based model achieved an AUC of 0.702 (95% CI, 0.556-0.823); and the AUC of the DCE-MRI-based model was 0.572 (95% CI, 0.424-0.711). The improvements in diagnostic performance of the multiparametric MRI model over the DWI- and DCE-MRI-based models were significant (P < 0.01 for both), whereas the increase over the T2WI-based model was not (P = 0.917). Conclusion Multiparametric MRI image analysis based on an ensemble CNN model with deep learning has practical value, and potential for wider application, in the preoperative prediction of axillary lymph node metastasis in invasive breast cancer.
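The multiparametric model combines the three sequence-specific ResNet-50 outputs by weighted voting. A toy sketch of such soft voting (the weights and probabilities below are invented for illustration and are not the study's values) is:

```python
import numpy as np

# Hypothetical P(ALNM) for three patients from each sequence-specific base model
p_t2wi = np.array([0.81, 0.30, 0.65])
p_dwi  = np.array([0.55, 0.42, 0.58])
p_dce  = np.array([0.49, 0.38, 0.61])

# Weights could, for example, reflect each base model's validation performance
weights = np.array([0.5, 0.3, 0.2])
p_ensemble = weights[0] * p_t2wi + weights[1] * p_dwi + weights[2] * p_dce

y_pred = (p_ensemble >= 0.5).astype(int)   # 1 = predicted ALNM, 0 = non-ALNM
```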
Affiliation(s)
- Xiaodong Zhang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Menghan Liu
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Wanqing Ren
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Jingxiang Sun
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Kesong Wang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Xiaoming Xi
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Guang Zhang
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University, Jinan, China (Correspondence: Guang Zhang)
11
Eskreis-Winkler S, Sutton EJ, D’Alessio D, Gallagher K, Saphier N, Stember J, Martinez DF, Morris EA, Pinker K. Breast MRI Background Parenchymal Enhancement Categorization Using Deep Learning: Outperforming the Radiologist. J Magn Reson Imaging 2022; 56:1068-1076. [PMID: 35167152 PMCID: PMC9376189 DOI: 10.1002/jmri.28111]
Abstract
BACKGROUND Background parenchymal enhancement (BPE) is assessed on breast MRI reports as mandated by the Breast Imaging Reporting and Data System (BI-RADS) but is prone to inter- and intrareader variation. Semiautomated and fully automated BPE assessment tools have been developed, but none has surpassed radiologist BPE designations. PURPOSE To develop a deep learning model for automated BPE classification and to compare its performance with current standard-of-care radiology report BPE designations. STUDY TYPE Retrospective. POPULATION Consecutive high-risk patients (i.e., >20% lifetime risk of breast cancer) who underwent contrast-enhanced screening breast MRI from October 2013 to January 2019. The study included 5224 breast MRIs, divided into 3998 training, 444 validation, and 782 testing exams. On radiology reports, 1286 exams were categorized as high BPE (i.e., marked or moderate) and 3938 as low BPE (i.e., mild or minimal). FIELD STRENGTH/SEQUENCE A 1.5 T or 3 T system; one precontrast and three postcontrast phases of fat-saturated T1-weighted dynamic contrast-enhanced imaging. ASSESSMENT The breast MRIs were used to develop two deep learning models (Slab artificial intelligence [AI]; maximum intensity projection [MIP] AI) for BPE categorization using radiology report BPE labels. Models were tested on a held-out test set using radiology report BPE and three-reader averaged consensus as the reference standards. STATISTICAL TESTS Model performance was assessed using receiver operating characteristic curve analysis. Associations between high BPE and BI-RADS assessments were evaluated using McNemar's chi-square test (α* = 0.025). RESULTS The Slab AI model significantly outperformed the MIP AI model across the full test set (area under the curve of 0.84 vs. 0.79) using the radiology report reference standard. Using the three-reader consensus BPE labels as the reference standard, our AI model significantly outperformed radiology report BPE labels. Finally, the AI model was significantly more likely than the radiologist to assign "high BPE" to suspicious breast MRIs and significantly less likely than the radiologist to assign "high BPE" to negative breast MRIs. DATA CONCLUSION Fully automated BPE assessments for breast MRIs could be more accurate than BPE assessments from radiology reports. LEVEL OF EVIDENCE 4. TECHNICAL EFFICACY Stage 3.
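The MIP AI model operates on maximum intensity projections rather than full volumes. A minimal example of producing a subtraction MIP from pre- and post-contrast volumes (an assumption about the preprocessing, which the abstract does not spell out) is:

```python
import numpy as np

def subtraction_mip(pre: np.ndarray, post: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of the contrast-subtraction volume (post - pre)."""
    subtraction = post.astype(np.float32) - pre.astype(np.float32)
    return subtraction.max(axis=axis)   # collapse one spatial axis, keeping the brightest voxel

# e.g. volumes of shape (slices, H, W) -> a single (H, W) MIP image for a 2D classifier
```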
Affiliation(s)
- Sarah Eskreis-Winkler
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Elizabeth J. Sutton
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Donna D’Alessio
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Katherine Gallagher
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Nicole Saphier
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Joseph Stember
- Department of Radiology, Neuroradiology Service, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA
- Danny F Martinez
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
- Katja Pinker
- Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, 300 E 66th Street, New York, NY 10065, USA
12
Yamamuro M, Asai Y, Hashimoto N, Yasuda N, Kimura H, Yamada T, Nemoto M, Kimura Y, Handa H, Yoshida H, Abe K, Tada M, Habe H, Nagaoka T, Nin S, Ishii K, Kondo Y. Utility of U-Net for the objective segmentation of the fibroglandular tissue region on clinical digital mammograms. Biomed Phys Eng Express 2022; 8. [PMID: 35728581 DOI: 10.1088/2057-1976/ac7ada]
Abstract
This study investigates the equivalence or compatibility between U-Net and visual segmentations of fibroglandular tissue regions by mammography experts for calculating breast density and mean glandular dose (MGD). A total of 703 mediolateral oblique-view mammograms were used for segmentation. Two region types were set as the ground truth (determined visually): (1) one type included only the region where fibroglandular tissue was identifiable (the 'dense region'); (2) the other type included the region where fibroglandular tissue may have existed in the past, provided that apparently adipose-only parts, such as the retromammary space, were excluded (the 'diffuse region'). U-Net was trained to segment the fibroglandular tissue region with an adaptive moment estimation optimiser, five-fold cross-validated with 400 training and 100 validation mammograms, and tested with 203 mammograms. Breast density and MGD were calculated using the van Engeland and Dance formulas, respectively, and compared between U-Net and the ground truth with the Dice similarity coefficient and Bland-Altman analysis. Dice similarity coefficients between U-Net and the ground truth were 0.895 and 0.939 for the dense and diffuse regions, respectively. In the Bland-Altman analysis, no proportional or fixed errors were discovered in either the dense or the diffuse region for breast density, whereas a slight proportional error was discovered in both regions for MGD (the slopes of the regression lines were -0.0299 and -0.0443 for the dense and diffuse regions, respectively). Consequently, U-Net and the ground truth were deemed equivalent (interchangeable) for breast density and compatible (interchangeable following four simple arithmetic operations) for MGD. U-Net-based segmentation of the fibroglandular tissue region was satisfactory for both region types, providing reliable segmentation for breast density and MGD calculations. U-Net will be useful in developing a reliable individualised screening-mammography programme that does not rely on the visual judgement of mammography experts.
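Agreement between U-Net and the visual ground truth is assessed with Bland-Altman analysis. A compact sketch of the bias, limits of agreement, and the proportional-error slope (the input values below are hypothetical, not the study's measurements) is:

```python
import numpy as np

def bland_altman(u_net_values: np.ndarray, expert_values: np.ndarray):
    """Bland-Altman statistics: mean bias, 95% limits of agreement, proportional-error slope."""
    diff = u_net_values - expert_values
    mean = (u_net_values + expert_values) / 2.0
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    # a proportional error appears as a non-zero slope of the differences regressed on the means
    slope, intercept = np.polyfit(mean, diff, deg=1)
    return bias, (bias - half_width, bias + half_width), slope

# e.g. breast densities from U-Net versus expert masks for a handful of mammograms
bias, limits, slope = bland_altman(np.array([0.21, 0.35, 0.50, 0.44]),
                                   np.array([0.20, 0.37, 0.48, 0.45]))
```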
Affiliation(s)
- Mika Yamamuro
- Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan; Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuouku, Niigata 951-8518, Japan
- Yoshiyuki Asai
- Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Naomi Hashimoto
- Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Nao Yasuda
- Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Hiroto Kimura
- Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Takahiro Yamada
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Mitsutaka Nemoto
- Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Yuichi Kimura
- Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Hisashi Handa
- Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Hisashi Yoshida
- Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Koji Abe
- Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Masahiro Tada
- Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Hitoshi Habe
- Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Takashi Nagaoka
- Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Seiun Nin
- Department of Radiology, Kindai University Faculty of Medicine, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Kazunari Ishii
- Department of Radiology, Kindai University Faculty of Medicine, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Yohan Kondo
- Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuouku, Niigata 951-8518, Japan
|
13
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
|
14
|
Bahl M. Updates in Artificial Intelligence for Breast Imaging. Semin Roentgenol 2022; 57:160-167. [PMID: 35523530 PMCID: PMC9077006 DOI: 10.1053/j.ro.2021.12.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 12/23/2021] [Indexed: 12/19/2022]
Abstract
Artificial intelligence (AI) for breast imaging has rapidly moved from the experimental to implementation phase. As of this writing, Food and Drug Administration (FDA)-approved mammographic applications are available for triage, lesion detection and classification, and breast density assessment. For sonography and MRI, FDA-approved applications are available for lesion classification. Numerous other interpretive and noninterpretive AI applications are in the development phase. This article reviews AI applications for mammography, sonography, and MRI that are currently available for clinical use. In addition, clinical implementation and the future of AI for breast imaging are discussed.
Affiliation(s)
- Manisha Bahl
- Massachusetts General Hospital, Department of Radiology, Boston, MA.
|
15
|
Grøvik E, Hoff SR. Editorial for "Breast MRI Background Parenchymal Enhancement Categorization Using Deep Learning: Outperforming the Radiologist". J Magn Reson Imaging 2022; 56:1077-1078. [PMID: 35343010 DOI: 10.1002/jmri.28183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Accepted: 03/15/2022] [Indexed: 11/10/2022] Open
Affiliation(s)
- Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Alesund, Norway.,Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Solveig Roth Hoff
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Alesund, Norway.,Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
|
16
|
Frankhouser DE, Dietze E, Mahabal A, Seewaldt VL. Vascularity and Dynamic Contrast-Enhanced Breast Magnetic Resonance Imaging. FRONTIERS IN RADIOLOGY 2021; 1:735567. [PMID: 37492179 PMCID: PMC10364989 DOI: 10.3389/fradi.2021.735567] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/11/2021] [Indexed: 07/27/2023]
Abstract
Angiogenesis is a key step in the initiation and progression of an invasive breast cancer. High microvessel density by morphological characterization predicts metastasis and poor survival in women with invasive breast cancers. However, morphologic characterization is subject to variability and only can evaluate a limited portion of an invasive breast cancer. Consequently, breast Magnetic Resonance Imaging (MRI) is currently being evaluated to assess vascularity. Recently, through the new field of radiomics, dynamic contrast enhanced (DCE)-MRI is being used to evaluate vascular density, vascular morphology, and detection of aggressive breast cancer biology. While DCE-MRI is a highly sensitive tool, there are specific features that limit computational evaluation of blood vessels. These include (1) DCE-MRI evaluates gadolinium contrast and does not directly evaluate biology, (2) the resolution of DCE-MRI is insufficient for imaging small blood vessels, and (3) DCE-MRI images are very difficult to co-register. Here we review computational approaches for detection and analysis of blood vessels in DCE-MRI images and present some of the strategies we have developed for co-registry of DCE-MRI images and early detection of vascularization.
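As a hedged illustration of the co-registration challenge highlighted in this review (not the authors' method), a generic rigid registration of pre- and post-contrast volumes can be set up with SimpleITK; the file names are placeholders.

```python
import SimpleITK as sitk

# Placeholder file names; any co-registered pair of DCE-MRI volumes would do.
fixed = sitk.ReadImage("pre_contrast.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("post_contrast.nii.gz", sitk.sitkFloat32)

# Mutual-information-driven rigid registration of the post-contrast volume onto the pre-contrast volume.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the moving volume into the fixed volume's grid using the estimated transform.
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
```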
Affiliation(s)
- David E. Frankhouser
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
- Eric Dietze
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
- Ashish Mahabal
- Department of Astronomy, Division of Physics, Mathematics, and Astronomy, California Institute of Technology (Caltech), Pasadena, CA, United States
- Victoria L. Seewaldt
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
|
17
|
Bauer E, Levy MS, Domachevsky L, Anaby D, Nissan N. Background parenchymal enhancement and uptake as breast cancer imaging biomarkers: A state-of-the-art review. Clin Imaging 2021; 83:41-50. [PMID: 34953310 DOI: 10.1016/j.clinimag.2021.11.021] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 10/29/2021] [Accepted: 11/15/2021] [Indexed: 12/20/2022]
Abstract
Within the past decade, background parenchymal enhancement (BPE) and background parenchymal uptake (BPU) have emerged as novel imaging-derived biomarkers in the diagnosis and treatment monitoring of breast cancer. Growing evidence supports the role of breast parenchyma vascularity and metabolic activity as probable risk factors for breast cancer development. Furthermore, in the presence of a newly diagnosed breast cancer, additional clinically relevant data have surprisingly been found in the respective imaging properties of the non-affected contralateral breast. Evaluation of the contralateral BPE and BPU has been found to be especially instrumental in predicting the prognosis of a patient with breast cancer and even anticipating their response to neoadjuvant chemotherapy. Simultaneously, further research has found a link between these two biomarkers, even though they represent different physical properties. The aim of this review is to provide an up-to-date summary of the current clinical applications of BPE and BPU as breast cancer imaging biomarkers, with the hope that it propels their further usage in clinical practice.
Affiliation(s)
- Ethan Bauer
- Department of Radiology, Sheba Medical Center, Israel; Sackler School of Medicine, Tel Aviv University, Israel
- Miri Sklair Levy
- Department of Radiology, Sheba Medical Center, Israel; Sackler School of Medicine, Tel Aviv University, Israel
- Liran Domachevsky
- Department of Radiology, Sheba Medical Center, Israel; Sackler School of Medicine, Tel Aviv University, Israel
- Debbie Anaby
- Department of Radiology, Sheba Medical Center, Israel; Sackler School of Medicine, Tel Aviv University, Israel
- Noam Nissan
- Department of Radiology, Sheba Medical Center, Israel; Sackler School of Medicine, Tel Aviv University, Israel.
|
18
|
Chalfant JS, Mortazavi S, Lee-Felker SA. Background Parenchymal Enhancement on Breast MRI: Assessment and Clinical Implications. CURRENT RADIOLOGY REPORTS 2021. [DOI: 10.1007/s40134-021-00386-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Purpose of Review
To present recent literature regarding the assessment and clinical implications of background parenchymal enhancement on breast MRI.
Recent Findings
The qualitative assessment of BPE remains variable within the literature, as well as in clinical practice. Several different quantitative approaches have been investigated in recent years, most commonly region of interest-based and segmentation-based assessments. However, quantitative assessment has not become standard in clinical practice to date. Numerous studies have demonstrated a clear association between higher BPE and future breast cancer risk. While higher BPE does not appear to significantly impact cancer detection, it may result in a higher abnormal interpretation rate. BPE is also likely a marker of pathologic complete response after neoadjuvant chemotherapy, with decreases in BPE during and after neoadjuvant chemotherapy correlated with pCR. In contrast, pre-treatment BPE does not appear to be predictive of pCR. The association between BPE and prognosis is less clear, with heterogeneous results in the literature.
Summary
Assessment of BPE continues to evolve, with heterogeneity in approaches to both qualitative and quantitative assessment. The level of BPE has important clinical implications, with associations with future breast cancer risk and treatment response. BPE may also be an imaging marker of prognosis, but future research is needed on this topic.
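To make the region of interest-based quantitative assessment mentioned above concrete, a minimal sketch is shown below; it assumes a reader-drawn fibroglandular ROI and co-registered pre- and post-contrast volumes, and is illustrative rather than any specific study's definition.

```python
import numpy as np

def roi_percent_enhancement(pre: np.ndarray, post: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean percent signal enhancement within a parenchymal region of interest.

    pre, post: pre- and post-contrast T1-weighted volumes (same shape, co-registered).
    roi_mask:  boolean mask of the fibroglandular ROI placed by the reader.
    """
    pre_mean = pre[roi_mask].mean()
    post_mean = post[roi_mask].mean()
    return 100.0 * (post_mean - pre_mean) / pre_mean
```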
|
19
|
The Value of Convolutional Neural Network-Based Magnetic Resonance Imaging Image Segmentation Algorithm to Guide Targeted Controlled Release of Doxorubicin Nanopreparation. CONTRAST MEDIA & MOLECULAR IMAGING 2021; 2021:9032017. [PMID: 34385899 PMCID: PMC8331278 DOI: 10.1155/2021/9032017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 07/17/2021] [Indexed: 12/29/2022]
Abstract
This study investigated the auxiliary role of a convolutional neural network (CNN)-based magnetic resonance imaging (MRI) segmentation algorithm in MRI-guided targeted therapy with doxorubicin nanopreparations, in order to evaluate the value of controlled drug release in patients with liver cancer. Eighty patients with liver cancer were selected as the study cohort. The aim was to apply the CNN-based MRI segmentation algorithm to image-guided analysis of the targeted controlled release of the doxorubicin nanopreparation and to assess the imaging value of this algorithm for targeted treatment of liver cancer. The results showed that the upgraded three-dimensional (3D) CNN-based MRI segmentation outperformed traditional CNN-based MRI segmentation, with significant improvements in indicators such as accuracy, precision, sensitivity, and specificity (all differences statistically significant, p < 0.05). In monitoring targeted doxorubicin nanopreparation therapy, MRI images processed with the 3D CNN-based segmentation algorithm could be observed more intuitively and used to guide the preparation accurately to the liver cancer target. The accuracy of determining targeted release of the nanopreparation reached 80 ± 6.25%, markedly higher than that of the control group (66.6 ± 5.32%) (p < 0.05). In summary, the CNN-based MRI segmentation algorithm shows good potential for guiding targeted doxorubicin nanopreparation therapy in patients with liver cancer and is worth promoting in adjuvant targeted drug treatment of cancer.
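The evaluation indicators reported above (accuracy, precision, sensitivity, and specificity) can be computed voxel-wise from a predicted and a reference binary mask; the sketch below is illustrative and is not the study's code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Voxel-wise accuracy, precision, sensitivity, and specificity for binary masks."""
    p, t = pred.astype(bool), truth.astype(bool)
    tp = np.sum(p & t)      # true positives
    tn = np.sum(~p & ~t)    # true negatives
    fp = np.sum(p & ~t)     # false positives
    fn = np.sum(~p & t)     # false negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, precision, sensitivity, specificity
```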
|
20
|
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model. J Digit Imaging 2021; 34:877-887. [PMID: 34244879 PMCID: PMC8455741 DOI: 10.1007/s10278-021-00472-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 05/27/2021] [Accepted: 06/09/2021] [Indexed: 12/11/2022] Open
Abstract
To develop a U-net deep learning method for breast tissue segmentation on fat-sat T1-weighted (T1W) MRI using transfer learning (TL) from a model developed for non-fat-sat images. The training dataset (N = 126) was imaged on a 1.5 T MR scanner, and the independent testing dataset (N = 40) was imaged on a 3 T scanner, both using fat-sat T1W pulse sequence. Pre-contrast images acquired in the dynamic-contrast-enhanced (DCE) MRI sequence were used for analysis. All patients had unilateral cancer, and the segmentation was performed using the contralateral normal breast. The ground truth of breast and fibroglandular tissue (FGT) segmentation was generated using a template-based segmentation method with a clustering algorithm. The deep learning segmentation was performed using U-net models trained with and without TL, by using initial values of trainable parameters taken from the previous model for non-fat-sat images. The ground truth of each case was used to evaluate the segmentation performance of the U-net models by calculating the dice similarity coefficient (DSC) and the overall accuracy based on all pixels. Pearson’s correlation was used to evaluate the correlation of breast volume and FGT volume between the U-net prediction output and the ground truth. In the training dataset, the evaluation was performed using tenfold cross-validation, and the mean DSC with and without TL was 0.97 vs. 0.95 for breast and 0.86 vs. 0.80 for FGT. When the final model developed with and without TL from the training dataset was applied to the testing dataset, the mean DSC was 0.89 vs. 0.83 for breast and 0.81 vs. 0.81 for FGT, respectively. Application of TL not only improved the DSC, but also decreased the required training case number. Lastly, there was a high correlation (R2 > 0.90) for both the training and testing datasets between the U-net prediction output and ground truth for breast volume and FGT volume. U-net can be applied to perform breast tissue segmentation on fat-sat images, and TL is an efficient strategy to develop a specific model for each different dataset.
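A minimal sketch of the transfer-learning step described above, initialising a new model from previously trained weights before fine-tuning on the second dataset; the file name and the Keras workflow are assumptions for illustration, not the authors' implementation.

```python
import tensorflow as tf

# Load a U-Net previously trained on non-fat-sat images (hypothetical file name).
source_model = tf.keras.models.load_model("unet_non_fat_sat.h5")

# Transfer learning: start the fat-sat model from the non-fat-sat weights rather
# than random initialisation, then fine-tune on the (smaller) fat-sat dataset.
target_model = tf.keras.models.clone_model(source_model)   # same architecture, new instance
target_model.set_weights(source_model.get_weights())
target_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="binary_crossentropy")

# Fine-tuning call (placeholder arrays):
# target_model.fit(fat_sat_images, fat_sat_masks, epochs=50, validation_split=0.1)
```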
|
21
|
Wei D, Jahani N, Cohen E, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Fully automatic quantification of fibroglandular tissue and background parenchymal enhancement with accurate implementation for axial and sagittal breast MRI protocols. Med Phys 2020; 48:238-252. [PMID: 33150617 DOI: 10.1002/mp.14581] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Revised: 10/05/2020] [Accepted: 10/23/2020] [Indexed: 01/03/2023] Open
Abstract
PURPOSE To propose and evaluate a fully automated technique for quantification of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in breast MRI. METHODS We propose a fully automated method, where after preprocessing, FGT is segmented in T1-weighted, nonfat-saturated MRI. Incorporating an anatomy-driven prior probability for FGT and robust texture descriptors against intensity variations, our method effectively addresses major image processing challenges, including wide variations in breast anatomy and FGT appearance among individuals. Our framework then propagates this segmentation to dynamic contrast-enhanced (DCE)-MRI to quantify BPE within the segmented FGT regions. Axial and sagittal image data from 40 cancer-unaffected women were used to evaluate our proposed method vs a manually annotated reference standard. RESULTS High spatial correspondence was observed between the automatic and manual FGT segmentation (mean Dice similarity coefficient 81.14%). The FGT and BPE quantifications (denoted FGT% and BPE%) indicated high correlation (Pearson's r = 0.99 for both) between automatic and manual segmentations. Furthermore, the differences between the FGT% and BPE% quantified using automatic and manual segmentations were low (mean differences: -0.66 ± 2.91% for FGT% and -0.17 ± 1.03% for BPE%). When correlated with qualitative clinical BI-RADS ratings, the correlation coefficient for FGT% was still high (Spearman's ρ = 0.92), whereas that for BPE was lower (ρ = 0.65). Our proposed approach also performed significantly better than a previously validated method for sagittal breast MRI. CONCLUSIONS Our method demonstrated accurate fully automated quantification of FGT and BPE in both sagittal and axial breast MRI. Our results also suggested the complexity of BPE assessment, demonstrating relatively low correlation between segmentation and clinical rating.
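As an illustration of segmentation-based quantification of FGT% and BPE% and of the correlation check against manual segmentation, a minimal sketch follows; the 20% relative-enhancement threshold is an assumption for illustration, not the paper's definition.

```python
import numpy as np
from scipy.stats import pearsonr

def fgt_and_bpe_percent(breast_mask, fgt_mask, pre, post, enhancement_threshold=0.2):
    """Volume fractions used for FGT and BPE quantification.

    FGT% = FGT volume / whole-breast volume.
    BPE% = fraction of FGT voxels whose relative enhancement exceeds a threshold
    (the 20% value here is an illustrative assumption).
    """
    fgt_pct = 100.0 * fgt_mask.sum() / breast_mask.sum()
    rel_enh = (post - pre) / np.clip(pre, 1e-6, None)
    enhanced = (rel_enh > enhancement_threshold) & fgt_mask
    bpe_pct = 100.0 * enhanced.sum() / fgt_mask.sum()
    return fgt_pct, bpe_pct

# Agreement between automatic and manual quantification across a cohort (placeholder lists):
# r, p = pearsonr(auto_fgt_pct_list, manual_fgt_pct_list)
```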
Affiliation(s)
- Dong Wei
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA.,Tencent Jarvis Lab, Shenzhen, Guangdong, 518057, China
- Nariman Jahani
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Eric Cohen
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Susan Weinstein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Meng-Kang Hsieh
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Lauren Pantalone
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
|
22
|
Nam Y, Park GE, Kang J, Kim SH. Fully Automatic Assessment of Background Parenchymal Enhancement on Breast MRI Using Machine-Learning Models. J Magn Reson Imaging 2020; 53:818-826. [PMID: 33219624 DOI: 10.1002/jmri.27429] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/15/2020] [Accepted: 10/16/2020] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Automated measurement and classification models with objectivity and reproducibility are required for accurate evaluation of the breast cancer risk of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). PURPOSE To develop and evaluate a machine-learning algorithm for breast FGT segmentation and BPE classification. STUDY TYPE Retrospective. POPULATION A total of 794 patients with breast cancer, 594 patients assigned to the development set, and 200 patients to the test set. FIELD STRENGTH/SEQUENCE 3T and 1.5T; T2-weighted, fat-saturated T1-weighted (T1W) with dynamic contrast enhancement (DCE). ASSESSMENT Manual segmentation was performed for the whole breast and FGT regions in the contralateral breast. The BPE region was determined by thresholding using the subtraction of the pre- and postcontrast T1W images and the segmented FGT mask. Two radiologists independently assessed the categories of FGT and BPE. A deep-learning-based algorithm was designed to segment and measure the volume of whole breast and FGT and classify the grade of BPE. STATISTICAL TESTS Dice similarity coefficients (DSC) and Spearman correlation analysis were used to compare the volumes from the manual and deep-learning-based segmentations. Kappa statistics were used for agreement analysis. Area under the receiver operating characteristic (ROC) curve (AUC) and F1 scores were calculated to evaluate the performance of BPE classification. RESULTS The mean (±SD) DSC for manual and deep-learning segmentations was 0.85 ± 0.11. The correlation coefficient for FGT volume from manual- and deep-learning-based segmentations was 0.93. Overall accuracy of manual segmentation and deep-learning segmentation in the BPE classification task was 66% and 67%, respectively. For binary categorization of BPE grade (minimal/mild vs. moderate/marked), overall accuracy increased to 91.5% in manual segmentation and 90.5% in deep-learning segmentation; the AUC was 0.93 in both methods. DATA CONCLUSION This deep-learning-based algorithm can provide reliable segmentation and classification results for BPE. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY STAGE: 2.
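A hedged sketch of the thresholding-based BPE assessment described above is given below; the relative-enhancement threshold and the category cut-offs are illustrative assumptions rather than the study's values.

```python
import numpy as np

def bpe_grade(pre, post, fgt_mask, enhancement_threshold=0.2):
    """Assign a BPE category from the enhanced fraction of fibroglandular tissue.

    The BPE region is obtained by thresholding the post-minus-pre subtraction
    within the FGT mask; the 20% relative-enhancement threshold and the
    quartile cut-offs below are illustrative assumptions.
    """
    rel_enh = (post - pre) / np.clip(pre, 1e-6, None)
    bpe_mask = (rel_enh > enhancement_threshold) & fgt_mask
    enhanced_fraction = bpe_mask.sum() / fgt_mask.sum()

    for cutoff, label in [(0.25, "minimal"), (0.50, "mild"), (0.75, "moderate")]:
        if enhanced_fraction < cutoff:
            return label
    return "marked"
```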
Affiliation(s)
- Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.,Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Ga Eun Park
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Junghwa Kang
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Sung Hun Kim
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
|
23
|
Volumetric breast density estimation on MRI using explainable deep learning regression. Sci Rep 2020; 10:18095. [PMID: 33093572 PMCID: PMC7581772 DOI: 10.1038/s41598-020-75167-6] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 10/12/2020] [Indexed: 01/10/2023] Open
Abstract
The purpose of this paper was to assess the feasibility of volumetric breast density estimation on MRI without segmentations, accompanied by an explainability step. A total of 615 patients with breast cancer were included for volumetric breast density estimation. A 3-dimensional regression convolutional neural network (CNN) was used to estimate the volumetric breast density. Patients were split into training (N = 400), validation (N = 50), and hold-out test (N = 165) sets. Hyperparameters were optimized using Neural Network Intelligence, and augmentations consisted of translations and rotations. The estimated densities were evaluated against the ground truth using Spearman's correlation and Bland–Altman plots. The output of the CNN was visually analyzed using SHapley Additive exPlanations (SHAP). Spearman's correlation between estimated and ground-truth density was ρ = 0.81 (N = 165, P < 0.001) in the hold-out test set. The estimated density had a median bias of 0.70% (95% limits of agreement = −6.8% to 5.0%) relative to the ground truth. SHAP showed that in correct density estimations, the algorithm based its decision on fibroglandular and fatty tissue; in incorrect estimations, other structures such as the pectoral muscle or the heart were included. To conclude, it is feasible to automatically estimate volumetric breast density on MRI without segmentations and to provide accompanying explanations.
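A minimal sketch of the explainability step, assuming a trained Keras regression CNN and using SHAP's GradientExplainer; the file names and array shapes are placeholders, and the original work may have used a different SHAP explainer.

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical trained 3D regression CNN and example volumes (names are assumptions).
model = tf.keras.models.load_model("density_regression_cnn.h5")
background = np.load("background_volumes.npy")   # e.g. shape (20, 64, 64, 64, 1)
test_volumes = np.load("test_volumes.npy")       # e.g. shape (5, 64, 64, 64, 1)

# GradientExplainer attributes the predicted density to individual input voxels.
explainer = shap.GradientExplainer(model, background)
shap_values = np.array(explainer.shap_values(test_volumes))

# Voxels with large positive attributions pushed the density estimate upwards;
# their anatomical location indicates whether the model relied on fibroglandular tissue.
print("mean |SHAP| over the test volumes:", np.abs(shap_values).mean())
```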
|
24
|
Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, Gulyás B. 3D Deep Learning on Medical Images: A Review. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5097. [PMID: 32906819 PMCID: PMC7570704 DOI: 10.3390/s20185097] [Citation(s) in RCA: 197] [Impact Index Per Article: 39.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 08/31/2020] [Accepted: 09/03/2020] [Indexed: 12/20/2022]
Abstract
Rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This growth was accelerated by advancements in convolutional neural network (CNN)-based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
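Typical preprocessing of a 3D volume before it is fed to a 3D CNN, resampling to isotropic spacing and normalising intensities, can be sketched as follows; this is an illustrative recipe, not the review's prescription.

```python
import SimpleITK as sitk
import numpy as np

def preprocess_volume(path, spacing=(1.0, 1.0, 1.0)):
    """Resample a volume to isotropic spacing and z-score normalise its intensities."""
    img = sitk.ReadImage(path, sitk.sitkFloat32)

    # Resample so that convolution kernels see comparable physical extents in every direction.
    original_spacing = img.GetSpacing()
    original_size = img.GetSize()
    new_size = [int(round(osz * ospc / nspc))
                for osz, ospc, nspc in zip(original_size, original_spacing, spacing)]
    img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), spacing, img.GetDirection(), 0.0, img.GetPixelID())

    # Z-score intensity normalisation.
    arr = sitk.GetArrayFromImage(img)
    return (arr - arr.mean()) / (arr.std() + 1e-8)
```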
Affiliation(s)
- Satya P. Singh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore;
- Sukrit Gupta
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore; (S.G.); (H.G.)
- Haveesh Goli
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore; (S.G.); (H.G.)
- Parasuraman Padmanabhan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Balázs Gulyás
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Department of Clinical Neuroscience, Karolinska Institute, 17176 Stockholm, Sweden
|
25
|
Borkowski K, Rossi C, Ciritsis A, Marcon M, Hejduk P, Stieb S, Boss A, Berger N. Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach. Medicine (Baltimore) 2020; 99:e21243. [PMID: 32702902 PMCID: PMC7373599 DOI: 10.1097/md.0000000000021243] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with higher risk of developing breast cancer. The background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories "minimal," "mild," "moderate," and "marked." The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories. This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction between the first post-contrast volume and the native T1-weighted images. A hierarchic approach was implemented relying on 2 dCNN models for detection of MR-slices imaging breast tissue and for BPE classification, respectively. Data annotation was performed by 2 board-certified radiologists. The consensus of the 2 radiologists was chosen as reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratic Cohen's kappa. Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Over the 4 classes, the BPE classification was reached with mean accuracies of 74% for training, 75% for the validation, and 75% for the real-world dataset. As compared to the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2). On the other hand, the reliability for the dCNN model was 0.815. Automatic classification of BPE can be performed with high accuracy and support the standardization of tissue classification in MRI.
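The quadratic Cohen's kappa used above for reader-versus-model reliability can be computed with scikit-learn; the category vectors below are illustrative, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# BPE categories (0=minimal, 1=mild, 2=moderate, 3=marked) for the same exams,
# e.g. the consensus reference vs. the dCNN prediction (illustrative values only).
reference = [0, 1, 1, 2, 3, 2, 0, 1, 3, 2]
dcnn      = [0, 1, 2, 2, 3, 1, 0, 1, 3, 3]

# Quadratic weighting penalises disagreements more heavily the further apart
# the ordinal categories are.
kappa = cohen_kappa_score(reference, dcnn, weights="quadratic")
print(f"quadratic Cohen's kappa = {kappa:.3f}")
```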
|
26
|
Automatic Breast and Fibroglandular Tissue Segmentation in Breast MRI Using Deep Learning by a Fully-Convolutional Residual Neural Network U-Net. Acad Radiol 2019; 26:1526-1535. [PMID: 30713130 DOI: 10.1016/j.acra.2019.01.012] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 01/03/2019] [Accepted: 01/13/2019] [Indexed: 12/17/2022]
Abstract
RATIONALE AND OBJECTIVES Breast segmentation using the U-net architecture was implemented and tested in independent validation datasets to quantify fibroglandular tissue volume in breast MRI. MATERIALS AND METHODS Two datasets were used. The training set was MRI of 286 patients with unilateral breast cancer. The segmentation was done on the contralateral normal breasts. The ground truth for the breast and fibroglandular tissue (FGT) was obtained by using a template-based segmentation method. The U-net deep learning algorithm was implemented to analyze the training set, and the final model was obtained using 10-fold cross-validation. The independent validation set was MRI of 28 normal volunteers acquired using four different MR scanners. Dice Similarity Coefficient (DSC), voxel-based accuracy, and Pearson's correlation were used to evaluate the performance. RESULTS For the 10-fold cross-validation in the initial training set of 286 patients, the DSC range was 0.83-0.98 (mean 0.95 ± 0.02) for breast and 0.73-0.97 (mean 0.91 ± 0.03) for FGT; and the accuracy range was 0.92-0.99 (mean 0.98 ± 0.01) for breast and 0.87-0.99 (mean 0.97 ± 0.01) for FGT. For the entire 224 testing breasts of the 28 normal volunteers in the validation datasets, the mean DSC was 0.86 ± 0.05 for breast, 0.83 ± 0.06 for FGT; and the mean accuracy was 0.94 ± 0.03 for breast and 0.93 ± 0.04 for FGT. The testing results for MRI acquired using four different scanners were comparable. CONCLUSION Deep learning based on the U-net algorithm can achieve accurate segmentation results for the breast and FGT on MRI. It may provide a reliable and efficient method to process large number of MR images for quantitative analysis of breast density.
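For orientation, a compact Keras U-Net of the kind used in this and several of the preceding studies is sketched below; the depth, filter counts, and input size are illustrative choices, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), base_filters=32):
    inputs = layers.Input(input_shape)
    # Encoder
    c1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, base_filters * 4)
    p3 = layers.MaxPooling2D()(c3)
    # Bottleneck
    b = conv_block(p3, base_filters * 8)
    # Decoder with skip connections
    u3 = layers.Conv2DTranspose(base_filters * 4, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), base_filters * 4)
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), base_filters * 2)
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), base_filters)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)  # binary FGT/breast mask
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```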
|
27
|
Kurata Y, Nishio M, Kido A, Fujimoto K, Yakami M, Isoda H, Togashi K. Automatic segmentation of the uterus on MRI using a convolutional neural network. Comput Biol Med 2019; 114:103438. [PMID: 31521902 DOI: 10.1016/j.compbiomed.2019.103438] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 08/20/2019] [Accepted: 09/04/2019] [Indexed: 01/11/2023]
Abstract
BACKGROUND This study was performed to evaluate the clinical feasibility of a U-net for fully automatic uterine segmentation on MRI by using images of major uterine disorders. METHODS This study included 122 female patients (14 with uterine endometrial cancer, 15 with uterine cervical cancer, and 55 with uterine leiomyoma). U-net architecture optimized for our research was used for automatic segmentation. Three-fold cross-validation was performed for validation. The results of manual segmentation of the uterus by a radiologist on T2-weighted sagittal images were used as the gold standard. Dice similarity coefficient (DSC) and mean absolute distance (MAD) were used for quantitative evaluation of the automatic segmentation. Visual evaluation using a 4-point scale was performed by two radiologists. DSC, MAD, and the score of the visual evaluation were compared between uteruses with and without uterine disorders. RESULTS The mean DSC of our model for all patients was 0.82. The mean DSCs for patients with and without uterine disorders were 0.84 and 0.78, respectively (p = 0.19). The mean MADs for patients with and without uterine disorders were 18.5 and 21.4 [pixels], respectively (p = 0.39). The scores of the visual evaluation were not significantly different between uteruses with and without uterine disorders. CONCLUSIONS Fully automatic uterine segmentation with our modified U-net was clinically feasible. The performance of the segmentation of our model was not influenced by the presence of uterine disorders.
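A minimal sketch of a symmetric mean absolute (boundary) distance between two binary masks, of the kind reported above in pixels; the exact MAD definition used in the paper may differ.

```python
import numpy as np
from scipy import ndimage

def mean_absolute_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric mean distance (in pixels) between the boundaries of two binary masks."""
    def boundary(mask):
        # Boundary pixels: mask minus its binary erosion.
        return mask & ~ndimage.binary_erosion(mask)

    ba, bb = boundary(mask_a.astype(bool)), boundary(mask_b.astype(bool))
    # Distance from every pixel to the nearest boundary pixel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~bb)
    dist_to_a = ndimage.distance_transform_edt(~ba)
    return 0.5 * (dist_to_b[ba].mean() + dist_to_a[bb].mean())
```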
Affiliation(s)
- Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan; Department of Diagnostic Radiology, Kobe City Medical Center General Hospital, 2-1-1, Minatojimaminamimachi, Chuo-ku, Kobe, Hyogo, 650-0047, Japan
- Mizuho Nishio
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan; Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan.
- Aki Kido
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Koji Fujimoto
- Human Brain Research Center Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Masahiro Yakami
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan; Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Hiroyoshi Isoda
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan; Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Kaori Togashi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
|
28
|
Reig B, Heacock L, Geras KJ, Moy L. Machine learning in breast MRI. J Magn Reson Imaging 2019; 52:998-1018. [PMID: 31276247 DOI: 10.1002/jmri.26852] [Citation(s) in RCA: 93] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Revised: 06/18/2019] [Accepted: 06/19/2019] [Indexed: 12/13/2022] Open
Abstract
Machine-learning techniques have led to remarkable advances in data extraction and analysis of medical imaging. Applications of machine learning to breast MRI continue to expand rapidly as increasingly accurate 3D breast and lesion segmentation allows the combination of radiologist-level interpretation (eg, BI-RADS lexicon), data from advanced multiparametric imaging techniques, and patient-level data such as genetic risk markers. Advances in breast MRI feature extraction have led to rapid dataset analysis, which offers promise in large pooled multiinstitutional data analysis. The object of this review is to provide an overview of machine-learning and deep-learning techniques for breast MRI, including supervised and unsupervised methods, anatomic breast segmentation, and lesion segmentation. Finally, it explores the role of machine learning, current limitations, and future applications to texture analysis, radiomics, and radiogenomics. Level of Evidence: 3 Technical Efficacy Stage: 2 J. Magn. Reson. Imaging 2019. J. Magn. Reson. Imaging 2020;52:998-1018.
Affiliation(s)
- Beatriu Reig
- The Department of Radiology, New York University School of Medicine, New York, New York, USA
- Laura Heacock
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Krzysztof J Geras
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Linda Moy
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA.,Center for Advanced Imaging Innovation and Research (CAI2 R), New York University School of Medicine, New York, New York, USA
|
29
|
Liao GJ, Henze Bancroft LC, Strigel RM, Chitalia RD, Kontos D, Moy L, Partridge SC, Rahbar H. Background parenchymal enhancement on breast MRI: A comprehensive review. J Magn Reson Imaging 2019; 51:43-61. [PMID: 31004391 DOI: 10.1002/jmri.26762] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2019] [Revised: 04/09/2019] [Accepted: 04/09/2019] [Indexed: 12/22/2022] Open
Abstract
The degree of normal fibroglandular tissue that enhances on breast MRI, known as background parenchymal enhancement (BPE), was initially described as an incidental finding that could affect interpretation performance. While BPE is now established to be a physiologic phenomenon that is affected by both endogenous and exogenous hormone levels, evidence supporting the notion that BPE frequently masks breast cancers is limited. However, compelling data have emerged to suggest BPE is an independent marker of breast cancer risk and breast cancer treatment outcomes. Specifically, multiple studies have shown that elevated BPE levels, measured qualitatively or quantitatively, are associated with a greater risk of developing breast cancer. Evidence also suggests that BPE could be a predictor of neoadjuvant breast cancer treatment response and overall breast cancer treatment outcomes. These discoveries come at a time when breast cancer screening and treatment have moved toward an increased emphasis on targeted and individualized approaches, of which the identification of imaging features that can predict cancer diagnosis and treatment response is an increasingly recognized component. Historically, researchers have primarily studied quantitative tumor imaging features in pursuit of clinically useful biomarkers. However, the need to segment less well-defined areas of normal tissue for quantitative BPE measurements presents its own unique challenges. Furthermore, there is no consensus on the optimal timing on dynamic contrast-enhanced MRI for BPE quantitation. This article comprehensively reviews BPE with a particular focus on its potential to increase precision approaches to breast cancer risk assessment, diagnosis, and treatment. It also describes areas of needed future research, such as the applicability of BPE to women at average risk, the biological underpinnings of BPE, and the standardization of BPE characterization. Level of Evidence: 3 Technical Efficacy Stage: 5 J. Magn. Reson. Imaging 2020;51:43-61.
Affiliation(s)
- Geraldine J Liao
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA.,Department of Radiology, Virginia Mason Medical Center, Seattle, Washington, USA
- Roberta M Strigel
- Department of Radiology, University of Wisconsin, Madison, Wisconsin, USA.,Department of Medical Physics, University of Wisconsin, Madison, Wisconsin, USA.,Carbone Cancer Center, University of Wisconsin, Madison, Wisconsin, USA
- Rhea D Chitalia
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Linda Moy
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Savannah C Partridge
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA
- Habib Rahbar
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA
|
30
|
Accuracy of Distinguishing Atypical Ductal Hyperplasia From Ductal Carcinoma In Situ With Convolutional Neural Network-Based Machine Learning Approach Using Mammographic Image Data. AJR Am J Roentgenol 2019; 212:1166-1171. [PMID: 30860901 PMCID: PMC8111785 DOI: 10.2214/ajr.18.20250] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
OBJECTIVE. The purpose of this study was to test the hypothesis that convolutional neural networks can be used to predict which patients with pure atypical ductal hyperplasia (ADH) may be safely monitored rather than undergo surgery. MATERIALS AND METHODS. A total of 298 unique images from 149 patients were used for our convolutional neural network algorithm. A total of 134 images came from 67 patients with ADH that had been diagnosed by stereotactic-guided biopsy of calcifications but had not been upgraded to ductal carcinoma in situ or invasive cancer at the time of surgical excision. A total of 164 images came from 82 patients with mammographic calcifications for whom ductal carcinoma in situ was the final diagnosis. Two standard mammographic magnification views of the calcifications (a craniocaudal view and a mediolateral or lateromedial view) were used for analysis. Calcifications were segmented using an open-source software platform and images were resized to fit a bounding box of 128 × 128 pixels. A topology with 15 hidden layers was used to implement the convolutional neural network. The network architecture contained five residual layers and dropout of 0.25 after each convolution. Patients were randomly separated into a training-and-validation set (80% of patients) and a test set (20% of patients). Code was implemented using open-source software on a workstation with an open-source operating system and a graphics card. RESULTS. The AUC value was 0.86 (95% CI, ± 0.03) for the test set. Aggregate sensitivity and specificity were 84.6% (95% CI, ± 4.0%) and 88.2% (95% CI, ± 3.0%), respectively. Diagnostic accuracy was 86.7% (95% CI, ± 2.9%). CONCLUSION. It is feasible to apply convolutional neural networks to distinguish pure atypical ductal hyperplasia from ductal carcinoma in situ with the use of mammographic images. A larger dataset will likely result in further improvement of our prediction model.
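An illustrative Keras sketch of a small residual classifier with dropout of 0.25 after each residual block, in the spirit of the architecture described above; the exact 15-layer topology of the paper is not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Two convolutions with an identity-style shortcut, followed by dropout of 0.25."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([y, shortcut])
    y = layers.Activation("relu")(y)
    return layers.Dropout(0.25)(y)

def build_classifier(input_shape=(128, 128, 1)):
    inputs = layers.Input(input_shape)
    x = inputs
    for filters in (16, 32, 64, 128, 256):   # five residual stages
        x = residual_block(x, filters)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # ADH (0) vs. DCIS (1)
    return Model(inputs, outputs)

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```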
|