1. Deep Learning Models for Automated Assessment of Breast Density Using Multiple Mammographic Image Types. Cancers (Basel) 2022; 14:5003. [PMID: 36291787] [PMCID: PMC9599904] [DOI: 10.3390/cancers14205003] [Received: 09/21/2022] [Revised: 10/09/2022] [Accepted: 10/10/2022] [Indexed: 11/24/2022] Open Access
Abstract
Simple Summary: The DL model predictions in automated breast density assessment were independent of the imaging technology, agreed moderately to substantially with the clinical readers' density values, and outperformed the values produced by commercial software.

Abstract: Recently, convolutional neural network (CNN) models have been proposed to automate breast density assessment, breast cancer detection, and risk stratification using a single image modality. However, analysis of breast density across multiple mammographic image types using clinical data has not been reported in the literature. In this study, we investigated pre-trained EfficientNetB0 deep learning (DL) models for automated assessment of breast density using multiple mammographic image types, with and without clinical information, to improve the reliability and versatility of reporting. A total of 120,000 for-processing and for-presentation full-field digital mammograms (FFDM), digital breast tomosynthesis (DBT) images, and synthesized 2D images from 5032 women were retrospectively analyzed. Each participant underwent up to three screening examinations and completed a questionnaire at each screening encounter. Pre-trained EfficientNetB0 DL models with or without clinical history were optimized. The DL models were evaluated using BI-RADS (fatty, scattered fibroglandular densities, heterogeneously dense, or extremely dense) versus binary (non-dense or dense) density classification. Pre-trained EfficientNetB0 model performance was compared against inter-observer and commercial software (Volpara) variability. The average Fleiss' kappa score between observers ranged from 0.31-0.50 for the BI-RADS classification and 0.55-0.69 for the binary classification, indicating considerable uncertainty among experts. Volpara-observer agreement was 0.33 for the BI-RADS and 0.54 for the binary classification, i.e., fair to moderate agreement.
However, agreement between our proposed pre-trained EfficientNetB0 DL models and the observers was 0.61-0.66 for the BI-RADS and 0.70-0.75 for the binary classification, i.e., moderate to substantial agreement. Overall, the best breast density estimates were obtained from for-presentation FFDM and DBT images without added clinical information. The pre-trained EfficientNetB0 model can automatically assess breast density from any image modality type, with the best results obtained from for-presentation FFDM and DBT, which are the most common image types archived in clinical practice.
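The inter-observer agreement reported above is Fleiss' kappa, which compares the observed agreement among multiple readers with the agreement expected by chance. A minimal pure-Python sketch of the standard formula; the readers, images, and labels below are hypothetical illustrations, not the study's data:

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa: ratings is a list of per-subject rater labels
    (same number of raters for every subject)."""
    n = len(ratings)            # number of subjects
    r = len(ratings[0])         # raters per subject
    # per-subject category counts
    counts = [[Counter(subj)[c] for c in categories] for subj in ratings]
    # mean per-subject agreement P_bar
    P_bar = sum((sum(c * c for c in row) - r) / (r * (r - 1))
                for row in counts) / n
    # chance agreement P_e from marginal category proportions
    p = [sum(row[j] for row in counts) / (n * r)
         for j in range(len(categories))]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# three hypothetical readers rating four mammograms dense ("d") / non-dense ("n")
ratings = [["d", "d", "d"], ["d", "d", "n"], ["n", "n", "n"], ["d", "n", "n"]]
print(round(fleiss_kappa(ratings, ["d", "n"]), 3))  # prints 0.333
```

The same function applies unchanged to the four BI-RADS categories; by convention, kappa of 0.41-0.60 reads as moderate agreement and 0.61-0.80 as substantial, the bands quoted in the abstract.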
2. Ueki W, Nishii T, Umehara K, Ota J, Higuchi S, Ohta Y, Nagai Y, Murakawa K, Ishida T, Fukuda T. Generative adversarial network-based post-processed image super-resolution technology for accelerating brain MRI: comparison with compressed sensing. Acta Radiol 2022; 64:336-345. [PMID: 35118883] [DOI: 10.1177/02841851221076330] [Indexed: 02/06/2023]
Abstract
BACKGROUND: It is unclear whether deep-learning-based super-resolution technology (SR) or compressed sensing technology (CS) better accelerates magnetic resonance imaging (MRI). PURPOSE: To compare SR-accelerated images with CS images in terms of similarity to reference 2D and 3D gradient-echo (GRE) brain MRI. MATERIAL AND METHODS: We prospectively acquired 2D and 3D GRE images of 20 volunteers at 1.3× and 2.0× acceleration relative to the reference acquisition time, by reducing the matrix size or increasing the CS factor. For SR, we trained a generative adversarial network (GAN) to upscale the low-resolution images to the reference images, with twofold cross-validation. We compared the structural similarity (SSIM) index of the accelerated images against the reference image. The rate at which a radiologist failed to discriminate the faster image from the reference image was used as a subjective image similarity (ISM) index. RESULTS: SR demonstrated significantly higher SSIM than CS (SSIM = 0.9993-0.999 vs. 0.9947-0.9986; P < 0.001). In 2D GRE, the SR image was harder to distinguish from the reference image than the CS image (ISM index 40% vs. 17.5% at 1.3×, P = 0.039; 17.5% vs. 2.5% at 2.0×, P = 0.034). In 3D GRE, CS showed a significantly higher ISM index than SR (22.5% vs. 2.5%; P = 0.011) for 2.0×-accelerated images. However, the ISM index did not differ significantly between 2.0× CS and 1.3× SR (22.5% vs. 27.5%; P = 0.62), which have comparable time costs. CONCLUSION: The GAN-based SR outperformed CS in image similarity for 2D GRE MRI acceleration. In 3D GRE, however, CS was more advantageous than SR.
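The SSIM index used above combines luminance, contrast, and structure terms into one similarity score. As a sketch only: a single-window version of the standard SSIM formula over flat pixel lists (real implementations slide a Gaussian window over the image, and the toy pixel values below are made up):

```python
def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two equal-length pixel lists.
    L is the dynamic range; C1, C2 are the usual stabilizing constants."""
    n = len(x)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n                       # means
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)          # variances
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

# hypothetical reference pixels vs. a slightly degraded reconstruction
ref = [10, 20, 30, 40, 50, 60]
degraded = [12, 18, 33, 39, 52, 58]
print(round(ssim_global(ref, degraded), 4))
```

An identical pair scores 1.0; values such as the 0.99+ range reported above indicate reconstructions nearly indistinguishable from the reference by this metric.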
Affiliation(s)
- Wataru Ueki: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Tatsuya Nishii: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Kensuke Umehara: Medical Informatics Section, QST Hospital, National Institutes for Quantum Science and Technology, Chiba, Japan; Applied MRI Research, Department of Molecular Imaging and Theranostics, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan; Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Junko Ota: Medical Informatics Section, QST Hospital, National Institutes for Quantum Science and Technology, Chiba, Japan; Applied MRI Research, Department of Molecular Imaging and Theranostics, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan; Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Satoshi Higuchi: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Yasutoshi Ohta: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Yasuhiro Nagai: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Keizo Murakawa: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
- Takayuki Ishida: Department of Medical Physics and Engineering, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Tetsuya Fukuda: Department of Radiology, National Cerebral and Cardiovascular Center, Suita, Osaka, Japan
3. Wei P. Radiomics, deep learning and early diagnosis in oncology. Emerg Top Life Sci 2021; 5:829-835. [PMID: 34874454] [PMCID: PMC8786297] [DOI: 10.1042/etls20210218] [Received: 09/27/2021] [Revised: 11/02/2021] [Accepted: 11/03/2021] [Indexed: 11/17/2022]
Abstract
Medical imaging, including X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a critical role in the early detection, diagnosis, and treatment response prediction of cancer. To ease radiologists' workload and help with challenging cases, computer-aided diagnosis has developed rapidly over the past decade, pioneered early on by radiomics and, more recently, driven by deep learning. In this mini-review, I use breast cancer as an example and review how medical imaging and its quantitative modeling, including radiomics and deep learning, have improved the early detection and treatment response prediction of breast cancer. I also outline what radiomics and deep learning have in common and how they differ in modeling procedure, sample size requirements, and computational implementation. Finally, I discuss the challenges and effort required to integrate deep learning models and software into clinical practice.
Affiliation(s)
- Peng Wei: Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
4. Jia Q, Shu H. BiTr-Unet: a CNN-Transformer Combined Network for MRI Brain Tumor Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2021; 2021:3-14. [PMID: 36005929] [PMCID: PMC9396958] [DOI: 10.1007/978-3-031-09002-8_1] [Indexed: 06/15/2023]
Abstract
Convolutional neural networks (CNNs) have achieved remarkable success in automatically segmenting organs and lesions in 3D medical images. Recently, vision transformer networks have exhibited exceptional performance in 2D image classification tasks. Compared with CNNs, transformer networks have the appealing advantage of extracting long-range features through their self-attention mechanism. We therefore propose a combined CNN-Transformer model, called BiTr-Unet, with specific modifications for brain tumor segmentation on multi-modal MRI scans. Our BiTr-Unet achieves good performance on the BraTS2021 validation dataset, with median Dice scores of 0.9335, 0.9304, and 0.8899 and median Hausdorff distances of 2.8284, 2.2361, and 1.4142 for the whole tumor, tumor core, and enhancing tumor, respectively. On the BraTS2021 testing dataset, the corresponding results are 0.9257, 0.9350, and 0.8874 for the Dice score, and 3, 2.2361, and 1.4142 for the Hausdorff distance. The code is publicly available at https://github.com/JustaTinyDot/BiTr-Unet.
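The two metrics quoted above measure different things: the Dice score rewards voxel-wise overlap between predicted and reference masks, while the Hausdorff distance penalizes the worst boundary error. A minimal sketch of both (the toy masks and point sets below are made up for illustration, not BraTS data):

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(x * y for x, y in zip(a, b))   # overlapping foreground voxels
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two Euclidean point sets."""
    def directed(P, Q):
        # worst-case distance from a point of P to its nearest point of Q
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# hypothetical flattened masks: prediction recovers 3 of 4 true voxels
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 1, 1, 0, 0]
print(dice(pred, truth))                               # prints 0.75

# boundary points of two hypothetical contours
print(hausdorff([(0, 0), (1, 0)], [(0, 0), (1, 3)]))   # prints 3.0
```

Dice of ~0.93 for the whole tumor, as reported above, therefore means near-complete voxel overlap, while a Hausdorff distance of ~2.8 bounds the largest boundary deviation in voxel units.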
Affiliation(s)
- Qiran Jia: Department of Biostatistics, School of Global Public Health, New York University, New York, NY 10003, USA
- Hai Shu: Department of Biostatistics, School of Global Public Health, New York University, New York, NY 10003, USA
5. Chan HP. Promise and Potential Pitfalls: Re-creating Images or Generating New Images for AI Modeling. Radiol Artif Intell 2021; 3:e210102. [PMID: 34350415] [PMCID: PMC8328104] [DOI: 10.1148/ryai.2021210102] [Received: 04/11/2021] [Revised: 04/16/2021] [Accepted: 04/19/2021] [Indexed: 11/11/2022]
Affiliation(s)
- Heang-Ping Chan: Department of Radiology, University of Michigan, 1500 E Medical Center Dr, Med Inn Building C477, Ann Arbor, MI 48109-5842