1. Ham S, Kim M, Lee S, Wang CB, Ko B, Kim N. Improvement of semantic segmentation through transfer learning of multi-class regions with convolutional neural networks on supine and prone breast MRI images. Sci Rep 2023; 13:6877. [PMID: 37106024] [PMCID: PMC10140273] [DOI: 10.1038/s41598-023-33900-x]
Abstract
Semantic segmentation of the breast and surrounding tissues in supine and prone breast magnetic resonance imaging (MRI) is required for various computer-assisted diagnosis and surgical applications. Variability of breast shape between supine and prone positions, along with various MRI artifacts, makes robust segmentation of the breast and surrounding tissues difficult. We therefore evaluated semantic segmentation with transfer learning of convolutional neural networks to achieve robust breast segmentation in supine breast MRI. A total of 29 patients with T1-weighted contrast-enhanced images were collected at Asan Medical Center, and breast MRI was performed in both the prone and the supine position. Four classes, comprising lungs and heart, muscles and bones, parenchyma with cancer, and skin and fat, were manually delineated by an expert. Semantic segmentation models trained on supine MRI, prone MRI, prone MRI transferred to supine MRI, and pooled supine and prone MRI were compared using 2D U-Net, 3D U-Net, 2D nnU-Net, and 3D nnU-Net. The best performance was achieved by the 2D models with transfer learning. Our results showed excellent performance and could be used for clinical purposes such as breast registration and computer-aided diagnosis.
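For orientation, the sketch below shows the kind of transfer-learning setup this abstract describes: a 2D segmentation network pretrained on prone slices is reloaded and fine-tuned on supine slices. It is an illustrative PyTorch sketch, not the authors' code; the tiny stand-in network, the five-class label map, the checkpoint file name, and the encoder-freezing choice are all assumptions.

```python
# Hedged sketch: fine-tuning a 2D segmentation network (stand-in for a 2D U-Net)
# pretrained on prone MRI slices using a small batch of supine slices.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # background + the 4 labeled regions in the study

class TinySegNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        return self.head(self.encoder(x))

model = TinySegNet()
# Transfer learning: start from weights learned on prone MRI, then fine-tune.
# model.load_state_dict(torch.load("unet_prone_pretrained.pt"))  # hypothetical checkpoint
for p in model.encoder.parameters():        # optionally freeze early features
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy supine batch: 4 single-channel 128x128 slices with integer label maps.
images = torch.randn(4, 1, 128, 128)
labels = torch.randint(0, NUM_CLASSES, (4, 128, 128))

for _ in range(3):                           # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```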
Affiliation(s)
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-ro, Danwon-gu, Ansan, Gyeonggi-do, Republic of Korea
- Minjee Kim
- Promedius Inc., 4 Songpa-daero 49-gil, Songpa-gu, Seoul, South Korea
- Sangwook Lee
- ANYMEDI Inc., 388-1 Pungnap-dong, Songpa-gu, Seoul, South Korea
- Chuan-Bing Wang
- Department of Radiology, First Affiliated Hospital of Nanjing Medical University, 300 Guangzhou Road, Nanjing, Jiangsu, China
- BeomSeok Ko
- Department of Breast Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Namkug Kim
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
2. Vairavan R, Abdullah O, Retnasamy PB, Sauli Z, Shahimin MM, Retnasamy V. A Brief Review on Breast Carcinoma and Deliberation on Current Non Invasive Imaging Techniques for Detection. Curr Med Imaging 2020; 15:85-121. [PMID: 31975658] [DOI: 10.2174/1573405613666170912115617]
Abstract
BACKGROUND Breast carcinoma is a life-threatening disease that accounts for 25.1% of all carcinomas among women worldwide. Early detection of the disease enhances the chance of survival. DISCUSSION This paper presents a comprehensive report on breast carcinoma and the modalities available for its detection and diagnosis. It reviews screening and detection modalities with a special focus on non-invasive techniques and their recent advancements, and proposes a novel method for early breast carcinoma detection. CONCLUSION This paper aims to serve as foundational guidance for the reader to attain a bird's-eye understanding of breast carcinoma and its current non-invasive detection modalities.
Affiliation(s)
- Rajendaran Vairavan
- School of Microelectronic Engineering, Universiti Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia
- Othman Abdullah
- Hospital Sultan Abdul Halim, 08000 Sg. Petani, Kedah, Malaysia
- Zaliman Sauli
- School of Microelectronic Engineering, Universiti Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia
- Mukhzeer Mohamad Shahimin
- Department of Electrical and Electronic Engineering, Faculty of Engineering, National Defence University of Malaysia (UPNM), Kem Sungai Besi, 57000 Kuala Lumpur, Malaysia
- Vithyacharan Retnasamy
- School of Microelectronic Engineering, Universiti Malaysia Perlis, Pauh Putra Campus, 02600 Arau, Perlis, Malaysia
3. Schreier J, Attanasi F, Laaksonen H. Generalization vs. Specificity: In Which Cases Should a Clinic Train its Own Segmentation Models? Front Oncol 2020; 10:675. [PMID: 32477941] [PMCID: PMC7241256] [DOI: 10.3389/fonc.2020.00675]
Abstract
As artificial intelligence for image segmentation becomes increasingly available, the question arises whether these solutions generalize between different hospitals and geographies. The present study addresses this question by comparing multi-institutional models to site-specific models. Using CT data sets from four clinics for organs-at-risk of the female breast, female pelvis, and male pelvis, we differentiate between the effect of population differences and differences in clinical practice. Our study thus provides guidelines to hospitals on when the training of a custom, hospital-specific deep neural network is advisable and when a network provided by a third party can be used. The results show that for the organs of the female pelvis and for the heart, segmentation quality is influenced solely by the training set size, while patient population variability affects female breast segmentation quality beyond the effect of training set size. In the comparison of site-specific contours on the male pelvis, we see that for a sufficiently large data set, a custom, hospital-specific model outperforms a multi-institutional one on some of the organs. However, for small hospital-specific data sets, a multi-institutional model provides better segmentation quality.
Affiliation(s)
- Jan Schreier
- Varian Medical Systems (United States), Palo Alto, CA, United States
4. Schreier J, Genghi A, Laaksonen H, Morgas T, Haas B. Clinical evaluation of a full-image deep segmentation algorithm for the male pelvis on cone-beam CT and CT. Radiother Oncol 2019; 145:1-6. [PMID: 31869676] [DOI: 10.1016/j.radonc.2019.11.021]
Abstract
AIM The segmentation of organs from a CT scan is a time-consuming task, which is one hindrance to adaptive radiation therapy. Through deep learning, it is possible to delineate organs automatically. Metrics such as the Dice score do not necessarily reflect the impact on clinical practice; a clinical evaluation of the deep neural network is therefore needed to verify segmentation quality. METHODS In this work, a novel deep neural network is trained on 300 CT and 300 artificially generated pseudo-CBCT scans to segment the bladder, prostate, rectum, and seminal vesicles from CT and cone-beam CT scans. The model is evaluated on 45 CBCT and 5 CT scans through a clinical review performed by three different clinics located in Europe, North America, and Australia. RESULTS The deep learning model is scored either equally good (prostate and seminal vesicles) or better (bladder and rectum) than the structures from routine clinical practice. No or only minor corrections are required for 97.5% of the bladder segmentations, 91.5% of the prostate, and 94% of the rectum and seminal vesicles. Overall, for 82.5% of the patients, none of the organs needs major corrections or a redraw. CONCLUSION This study shows that modern deep neural networks are capable of producing clinically applicable organ segmentations for the male pelvis. The model produces acceptable structures as frequently as current clinical routine. Therefore, deep neural networks can simplify the clinical workflow by offering initial segmentations. The study further shows that, to retain clinicians' personal preferences, a structure review and correction is necessary for structures created both by other clinicians and by deep neural networks.
Affiliation(s)
- Jan Schreier
- Varian Medical Systems, Palo Alto, United States
5. Quantitative Volumetric K-Means Cluster Segmentation of Fibroglandular Tissue and Skin in Breast MRI. J Digit Imaging 2019; 31:425-434. [PMID: 29047034] [DOI: 10.1007/s10278-017-0031-1]
Abstract
Mammographic breast density (MBD) is the most commonly used measure of the volume of fibroglandular tissue (FGT). However, MRI could provide a clinically feasible and more accurate alternative. This study had three aims: (1) to evaluate a clinically feasible method to quantify FGT with MRI, (2) to assess the inter-rater agreement of MRI-based volumetric measurements, and (3) to compare them to measurements acquired using digital mammography and 3D tomosynthesis. This retrospective study examined 72 women (mean age 52.4 ± 12.3 years) with 105 disease-free breasts undergoing diagnostic 3.0-T breast MRI and either digital mammography or tomosynthesis. Two observers analyzed MRI images for breast and FGT volumes and FGT-% from T1-weighted images (0.7-, 2.0-, and 4.0-mm-thick slices) using K-means clustering, histogram data, and active contour algorithms. Reference values were obtained with Quantra software. Inter-rater agreement for MRI measurements made with 2-mm-thick slices was excellent: for FGT-%, r = 0.994 (95% CI 0.990-0.997); for breast volume, r = 0.985 (95% CI 0.934-0.994); and for FGT volume, r = 0.979 (95% CI 0.958-0.989). MRI-based FGT-% correlated strongly with MBD in mammography (r = 0.819-0.904, P < 0.001) and moderately to strongly with MBD in tomosynthesis (r = 0.630-0.738, P < 0.001). K-means clustering-based assessment of the proportion of fibroglandular tissue in the breast at MRI is highly reproducible. In the future, quantitative assessment of FGT-% to complement visual estimation of FGT should be performed on a more regular basis, as it provides a component that can be incorporated into an individual's breast cancer risk stratification.
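As a rough illustration of the K-means approach described above (not the study's implementation; the synthetic volume, the breast mask, and the two-cluster choice are assumptions), the proportion of fibroglandular tissue can be estimated by clustering T1-weighted voxel intensities inside a breast mask and taking the hypointense cluster as FGT:

```python
# Hedged sketch: voxel-wise K-means clustering of T1-weighted intensities inside
# a breast mask, with the darker cluster treated as FGT to compute FGT-%.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
volume = rng.normal(0.7, 0.1, size=(40, 128, 128))           # synthetic T1 volume
breast_mask = np.zeros_like(volume, dtype=bool)
breast_mask[:, 30:100, 20:110] = True                         # hypothetical breast mask
# Make roughly 30% of masked voxels darker, mimicking fibroglandular tissue.
volume[breast_mask] += rng.choice([0.0, -0.4], size=breast_mask.sum(), p=[0.7, 0.3])

intensities = volume[breast_mask].reshape(-1, 1)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(intensities)
fgt_cluster = int(np.argmin(km.cluster_centers_))             # FGT is hypointense on T1
fgt_voxels = (km.labels_ == fgt_cluster).sum()

fgt_percent = 100.0 * fgt_voxels / breast_mask.sum()
print(f"FGT-%: {fgt_percent:.1f}")
```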
6. Verburg E, Wolterink JM, Waard SN, Išgum I, Gils CH, Veldhuis WB, Gilhuijs KGA. Knowledge-based and deep learning-based automated chest wall segmentation in magnetic resonance images of extremely dense breasts. Med Phys 2019; 46:4405-4416. [DOI: 10.1002/mp.13699]
Affiliation(s)
- Erik Verburg
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Jelmer M. Wolterink
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Stephanie N. Waard
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Carla H. Gils
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Wouter B. Veldhuis
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Kenneth G. A. Gilhuijs
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
7. Schreier J, Attanasi F, Laaksonen H. A Full-Image Deep Segmenter for CT Images in Breast Cancer Radiotherapy Treatment. Front Oncol 2019; 9:677. [PMID: 31403032] [PMCID: PMC6669791] [DOI: 10.3389/fonc.2019.00677]
Abstract
Radiation therapy is one of the key cancer treatment options. To avoid adverse effects in healthy tissue, the treatment plan needs to be based on accurate anatomical models of the patient. In this work, an automatic segmentation solution for both female breasts and the heart is constructed using deep learning. Our newly developed deep neural networks perform better than the current state-of-the-art networks while improving inference speed by an order of magnitude. While manual segmentation by clinicians takes around 20 min, our automatic segmentation takes less than a second, with an average of 3 min of manual correction time. Thus, our proposed solution can have a substantial impact on the workload of clinical staff and on the standardization of care.
Affiliation(s)
- Jan Schreier
- Varian Medical Systems, Palo Alto, CA, United States
8. Wei D, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Three-Dimensional Whole Breast Segmentation in Sagittal and Axial Breast MRI With Dense Depth Field Modeling and Localized Self-Adaptation for Chest-Wall Line Detection. IEEE Trans Biomed Eng 2019; 66:1567-1579. [PMID: 30334748] [PMCID: PMC6684022] [DOI: 10.1109/tbme.2018.2875955]
Abstract
OBJECTIVE Whole breast segmentation is an essential task in quantitative analysis of breast MRI for cancer risk assessment. It is challenging mainly because the chest-wall line (CWL) can be very difficult to locate, owing to its spatially varying appearance (caused by both anatomy and imaging artifacts) and to neighboring distracting structures. This paper proposes an automatic three-dimensional (3-D) whole breast segmentation method for breast MRI, termed DeepSeA. METHODS DeepSeA distinguishes itself from previous methods in three aspects. First, it reformulates the challenging problem of CWL localization as an equivalent problem that optimizes a smooth depth field, thereby fully exploiting the CWL's 3-D continuity. Second, it employs a localized self-adapting algorithm to adjust to the CWL's spatial variation. Third, it applies equally well to breast MRI data in both sagittal and axial orientations without training. RESULTS A representative set of 99 breast MRI scans with varying imaging protocols is used for evaluation. Experimental results against an expert-outlined reference standard show that DeepSeA segments breasts accurately: the average Dice similarity coefficient, sensitivity, specificity, and CWL deviation error are 96.04%, 97.27%, 98.77%, and 1.63 mm, respectively. In addition, the configuration of DeepSeA is generalized based on experimental findings for application to broad prospective data. CONCLUSION A fully automatic method, DeepSeA, for whole breast segmentation in sagittal and axial breast MRI is reported. SIGNIFICANCE DeepSeA can facilitate cancer risk assessment with breast MRI.
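The overlap metrics quoted above (Dice, sensitivity, specificity) are standard measures computed from binary prediction and reference masks; the short NumPy sketch below is generic evaluation code, not part of DeepSeA itself, with synthetic masks used purely for illustration.

```python
# Generic sketch of the overlap metrics reported in the abstract.
import numpy as np

def dice(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def sensitivity(pred, ref):
    tp = np.logical_and(pred, ref).sum()   # true positives
    return tp / ref.sum()

def specificity(pred, ref):
    tn = np.logical_and(~pred, ~ref).sum()  # true negatives
    return tn / (~ref).sum()

# Synthetic reference and slightly shifted prediction masks.
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True
pred = np.zeros_like(ref); pred[18:50, 16:48] = True
print(dice(pred, ref), sensitivity(pred, ref), specificity(pred, ref))
```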
Affiliation(s)
- Dong Wei
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Susan Weinstein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Meng-Kang Hsieh
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Lauren Pantalone
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
9. Rampun A, Scotney BW, Morrow PJ, Wang H, Winder J. Segmentation of breast MR images using a generalised 2D mathematical model with inflation and deflation forces of active contours. Artif Intell Med 2019; 97:44-60. [DOI: 10.1016/j.artmed.2018.10.007]
10. Pandey D, Yin X, Wang H, Su MY, Chen JH, Wu J, Zhang Y. Automatic and fast segmentation of breast region-of-interest (ROI) and density in MRIs. Heliyon 2018; 4:e01042. [PMID: 30582055] [PMCID: PMC6299131] [DOI: 10.1016/j.heliyon.2018.e01042]
Abstract
Accurate segmentation of the breast region of interest (BROI) and breast density (BD) is a significant challenge in the analysis of breast MR images. Most existing methods for breast segmentation are semi-automatic and limited in their ability to achieve accurate results, because landmarks are difficult to remove from noisy magnetic resonance images (MRI) owing to similar intensity levels and their close connection to the BROI. This study proposes an innovative, fully automatic, and fast segmentation approach to identify and remove landmarks such as the heart and pectoral muscles. The BROI segmentation is carried out in a framework consisting of three major steps. First, we use adaptive Wiener filtering and K-means clustering to minimize the influence of noise, preserve edges, and remove unwanted artefacts. The second step systematically excludes the heart area by utilizing active-contour-based level sets, where the initial contour points are determined by maximum entropy thresholding and a convolution method. Finally, the pectoral muscle is removed by using morphological operations and local adaptive thresholding on the MR images. Prior to the elimination of the pectoral muscle, the MR image is subdivided into three sections (left, right, and central) based on geometrical information. Subsequently, BD segmentation is achieved with four-level fuzzy c-means (FCM) thresholding on the denoised BROI segmentation. The proposed method is validated using 1350 breast images from 15 female subjects. Pixel-based quantitative analysis showed excellent segmentation results when compared with manually drawn BROI and BD. Furthermore, the results in terms of the evaluation metrics Acc, Sp, AUC, MR, P, Se, and DSC demonstrate the high quality of the segmentations obtained with the proposed method. The average computational time for the segmentation of BROI and BD is 1 minute and 50 seconds.
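A minimal sketch of the first stage of this pipeline, adaptive Wiener filtering followed by K-means intensity clustering, is shown below. It uses SciPy and scikit-learn on a synthetic slice and makes no claim to reproduce the authors' parameter choices or later steps (level sets, morphology, FCM).

```python
# Hedged sketch of the denoising + clustering pre-processing step.
import numpy as np
from scipy.signal import wiener
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
slice_2d = rng.normal(0.5, 0.15, size=(128, 128))   # synthetic noisy MR slice
denoised = wiener(slice_2d, mysize=5)                # adaptive Wiener filter

# Cluster pixel intensities into 3 groups (assumed cluster count).
labels = KMeans(n_clusters=3, n_init=10, random_state=0) \
    .fit_predict(denoised.reshape(-1, 1)).reshape(denoised.shape)
print("cluster sizes:", np.bincount(labels.ravel()))
```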
Affiliation(s)
- Dinesh Pandey
- Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
- Xiaoxia Yin
- Cyberspace Institute of Advanced Technology (CIAT), Guangzhou University, Guangzhou 510006, China
- Hua Wang
- Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
- Min-Ying Su
- Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, CA, United States of America
- Jeon-Hor Chen
- Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, CA, United States of America
- Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Jianlin Wu
- Department of Radiology, Zhongshan Hospital of Dalian University, Dalian, Liaoning, China
- Yanchun Zhang
- Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
11. Pujara AC, Mikheev A, Rusinek H, Gao Y, Chhor C, Pysarenko K, Rallapalli H, Walczyk J, Moccaldi M, Babb JS, Melsaether AN. Comparison between qualitative and quantitative assessment of background parenchymal enhancement on breast MRI. J Magn Reson Imaging 2017; 47:1685-1691. [PMID: 29140576] [DOI: 10.1002/jmri.25895]
Abstract
BACKGROUND Potential clinical implications of the level of background parenchymal enhancement (BPE) on breast MRI are increasing. Currently, BPE is typically evaluated subjectively, and tests of concordance between subjective BPE assessment and computer-assisted quantified BPE have not been reported. PURPOSE OR HYPOTHESIS To compare subjective radiologist assessment of BPE with objective quantified parenchymal enhancement (QPE). STUDY TYPE Cross-sectional observational study. POPULATION Between 7/24/2015 and 11/27/2015, 104 sequential patients (ages 23-81 years, mean 49 years) without breast cancer underwent breast MRI and were included in this study. FIELD STRENGTH/SEQUENCE 3T; fat-suppressed axial T2, axial T1, and axial fat-suppressed T1 before and after intravenous contrast. ASSESSMENT Four breast imagers graded BPE at 90 and 180 s after contrast injection on a 4-point scale (a-d). Fibroglandular tissue masks were generated using a phantom-validated segmentation algorithm and were co-registered to pre- and postcontrast fat-suppressed images to define the region of interest, from which QPE was calculated. STATISTICAL TESTS Receiver operating characteristic (ROC) analyses and kappa coefficients (k) were used to compare subjective BPE with QPE. RESULTS ROC analyses indicated that subjective BPE at 90 s was best predicted by QPE ≤20.2 = a, 20.3-25.2 = b, 25.3-50.0 = c, >50.0 = d, and at 180 s by QPE ≤32.2 = a, 32.3-38.3 = b, 38.4-74.5 = c, >74.5 = d. Agreement between subjective BPE and QPE was slight to fair at 90 s (k = 0.20-0.36) and 180 s (k = 0.19-0.28). At higher levels of QPE, agreement between subjective BPE and QPE decreased significantly for all four radiologists at 90 s (P ≤ 0.004) and for three of four radiologists at 180 s (P ≤ 0.004). DATA CONCLUSION Radiologists were less consistent with QPE as QPE increased. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2018;47:1685-1691.
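The agreement analysis can be pictured with the short sketch below, which bins synthetic QPE values using the 90-second cut-points quoted in the abstract and computes Cohen's kappa against synthetic reader grades. The data and the exact binning convention at the category boundaries are assumptions, not the study's code.

```python
# Hedged sketch: mapping QPE to BPE categories and computing Cohen's kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
qpe = rng.uniform(5, 80, size=104)             # synthetic QPE values (%)
reader_grade = rng.integers(0, 4, size=104)    # synthetic subjective BPE (a-d -> 0-3)

# Map QPE to categories a-d (0-3) using the reported 90 s thresholds.
qpe_grade = np.digitize(qpe, bins=[20.2, 25.2, 50.0])

print("kappa:", cohen_kappa_score(reader_grade, qpe_grade))
```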
Affiliation(s)
- Akshat C Pujara
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Artem Mikheev
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Henry Rusinek
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Yiming Gao
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Breast Imaging Section, New York University School of Medicine, New York, New York, USA
- Chloe Chhor
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Breast Imaging Section, New York University School of Medicine, New York, New York, USA
- Kristine Pysarenko
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Breast Imaging Section, New York University School of Medicine, New York, New York, USA
- Harikrishna Rallapalli
- Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Jerzy Walczyk
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Melanie Moccaldi
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Perlmutter Cancer Center, New York University School of Medicine, New York, New York, USA
- James S Babb
- Department of Radiology, New York University School of Medicine, New York, New York, USA
- Amy N Melsaether
- Department of Radiology, New York University School of Medicine, New York, New York, USA; Breast Imaging Section, New York University School of Medicine, New York, New York, USA
12. Dalmış MU, Litjens G, Holland K, Setio A, Mann R, Karssemeijer N, Gubern-Mérida A. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med Phys 2017; 44:533-546. [PMID: 28035663] [DOI: 10.1002/mp.12079]
Abstract
PURPOSE Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to this task. However, the applicability of these methods is usually limited by the characteristics of the images in the study datasets, while breast MRI varies with respect to the MRI protocols used, in addition to the variability in breast shapes. All this variability, together with various MRI artifacts, makes it challenging to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." MATERIALS AND METHODS We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: non-breast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice similarity coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed from manual and automated segmentations. RESULTS The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 for the 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from the 3C U-net, 2C U-nets, and atlas-based method were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than the 0.957 obtained from the 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferroni correction) and the 0.938 obtained from the atlas-based method (P = 0.0016). CONCLUSIONS We applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.
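To make the density computation concrete, the sketch below derives breast density from a 3-class label volume (assumed coding: 0 = non-breast, 1 = fat inside the breast, 2 = FGT) and correlates automated against manual densities with Pearson's r. The synthetic volumes and the label coding are illustrative assumptions, not the authors' data or code.

```python
# Hedged sketch: breast density from a 3-class segmentation and Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

def breast_density(label_volume):
    fat = (label_volume == 1).sum()
    fgt = (label_volume == 2).sum()
    return fgt / (fat + fgt)

rng = np.random.default_rng(3)
auto = [rng.integers(0, 3, size=(32, 64, 64)) for _ in range(10)]            # synthetic 3C outputs
manual = [np.clip(v + rng.integers(-1, 2, size=v.shape), 0, 2) for v in auto]  # perturbed "manual" labels

auto_d = [breast_density(v) for v in auto]
manual_d = [breast_density(v) for v in manual]
r, p = pearsonr(auto_d, manual_d)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```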
Affiliation(s)
- Mehmet Ufuk Dalmış
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Geert Litjens
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Katharina Holland
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Arnaud Setio
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Ritse Mann
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Nico Karssemeijer
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
- Albert Gubern-Mérida
- Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
13. Pujara AC, Mikheev A, Rusinek H, Rallapalli H, Walczyk J, Gao Y, Chhor C, Pysarenko K, Babb JS, Melsaether AN. Clinical applicability and relevance of fibroglandular tissue segmentation on routine T1 weighted breast MRI. Clin Imaging 2017; 42:119-125. [DOI: 10.1016/j.clinimag.2016.12.002]
14. Localized-atlas-based segmentation of breast MRI in a decision-making framework. Australasian Physical & Engineering Sciences in Medicine 2017; 40:69-84. [PMID: 28116639] [DOI: 10.1007/s13246-016-0513-3]
Abstract
Breast-region segmentation is an important step for density estimation and computer-aided diagnosis (CAD) systems in magnetic resonance imaging (MRI). Detection of the breast-chest wall boundary is often difficult because of the similarity between the gray-level values of fibroglandular tissue and the pectoral muscle. This paper proposes a robust breast-region segmentation method that is applicable both to complex cases, in which fibroglandular tissue is connected to the pectoral muscle, and to simple cases with high-contrast boundaries. We present a decision-making framework based on geometric features and a support vector machine (SVM) to classify breasts into two main groups, complex and simple. For complex cases, breast segmentation is performed using a combination of intensity-based and atlas-based techniques; for simple cases, only the intensity-based operation is employed. A novel atlas-based method, called localized-atlas, accomplishes atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on the similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3%, 92.9%, 97.4%, 2.61%, and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm in terms of averaged deviation distance. These results show that the proposed framework performs breast segmentation with negligible errors and efficient computational time for breasts of different sizes, shapes, and density patterns.
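The decision-making step can be pictured as an SVM router over per-case geometric features, with complex cases sent to the combined intensity/atlas pipeline and simple cases to intensity-based segmentation alone. The features, labels, and routing strings below are hypothetical stand-ins, not the paper's implementation.

```python
# Hedged sketch: SVM-based routing of cases to simple vs. complex segmentation paths.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
features = rng.normal(size=(210, 3))           # e.g. geometric descriptors per case (assumed)
is_complex = (features[:, 0] > 0).astype(int)  # synthetic ground-truth class labels

clf = SVC(kernel="rbf").fit(features, is_complex)

def segment_case(feature_vec):
    # Route the case based on the SVM's complex/simple decision.
    if clf.predict(feature_vec.reshape(1, -1))[0] == 1:
        return "intensity + localized-atlas segmentation"   # complex case
    return "intensity-based segmentation only"              # simple case

print(segment_case(features[0]))
```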
15. Principles and methods for automatic and semi-automatic tissue segmentation in MRI data. Magnetic Resonance Materials in Physics, Biology and Medicine 2016; 29:95-110. [DOI: 10.1007/s10334-015-0520-5]