1. Wang Y, Luo L, Wu M, Wang Q, Chen H. Learning robust medical image segmentation from multi-source annotations. Med Image Anal 2025;101:103489. PMID: 39933334. DOI: 10.1016/j.media.2025.103489.
Abstract
Collecting annotations from multiple independent sources can mitigate the impact of potential noise and biases from any single source, and is a common practice in medical image segmentation. However, learning segmentation networks from multi-source annotations remains challenging because of the uncertainty introduced by the variance among annotations. In this paper, we propose an Uncertainty-guided Multi-source Annotation Network (UMA-Net), which guides the training process with uncertainty estimates at both the pixel and the image level. First, we develop an annotation uncertainty estimation module (AUEM) to estimate the pixel-wise uncertainty of each annotation, which then guides the network to learn from reliable pixels through a weighted segmentation loss. Second, a quality assessment module (QAM) is proposed to assess the image-level quality of the input samples based on the estimated annotation uncertainties. Furthermore, instead of discarding the low-quality samples, we introduce an auxiliary predictor to learn from them, thus preserving their representation knowledge in the backbone without directly accumulating errors within the primary predictor. Extensive experiments demonstrate the effectiveness and feasibility of UMA-Net on various datasets, including a 2D chest X-ray segmentation dataset, a 2D fundus image segmentation dataset, a 3D breast DCE-MRI segmentation dataset, and the QUBIQ multi-task segmentation dataset. Code will be released at https://github.com/wangjin2945/UMA-Net.
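The uncertainty-weighted segmentation loss described above can be sketched in a few lines. This is a minimal illustration of down-weighting unreliable pixels; the function name and the simple weighting w = 1 - u are assumptions, not the paper's exact formulation:

```python
import numpy as np

def uncertainty_weighted_loss(probs, annotation, uncertainty, eps=1e-7):
    """Binary cross-entropy where each pixel's contribution is scaled by
    w = 1 - u, so pixels with high estimated annotation uncertainty
    contribute less to the loss. Illustrative sketch only."""
    ce = -(annotation * np.log(probs + eps)
           + (1 - annotation) * np.log(1 - probs + eps))
    w = 1.0 - uncertainty  # reliable pixels keep weight close to 1
    return float((w * ce).sum() / (w.sum() + eps))
```

Flagging a mislabeled pixel with high uncertainty shrinks its contribution, which is the core idea behind learning from reliable pixels.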
Affiliation(s)
- Yifeng Wang
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Luyang Luo
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Qiong Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China; State Key Laboratory of Molecular Neuroscience, The Hong Kong University of Science and Technology, Hong Kong, China
2. Chen M, Xing J, Guo L. MRI-based Deep Learning Models for Preoperative Breast Volume and Density Assessment Assisting Breast Reconstruction. Aesthetic Plast Surg 2024;48:4994-5006. PMID: 38806828. DOI: 10.1007/s00266-024-04074-2.
Abstract
BACKGROUND The volume of the implant is the most critical element of breast reconstruction, so the preoperative volumes of the healthy and affected breasts must be assessed accurately to select the appropriate implant. Accurate, automated methods for quantitative assessment of breast volume can optimize breast reconstruction surgery and assist physicians in clinical decision making. The aim of this study was to develop an artificial intelligence model for automated segmentation of the breast and measurement of its volume. MATERIALS AND METHODS A total of 249 subjects undergoing breast reconstruction surgery were enrolled in this study. Subjects underwent preoperative breast MRI, and the breast region manually outlined by an imaging physician served as the gold standard for the volume measurements of the automated segmentation models. We developed three algorithms for automatic segmentation of breast regions: a simple registration model, a dynamic programming model, and a deep learning model. The volumetric agreement between the three automated segmentation algorithms and the manually segmented breast regions was evaluated by calculating the mean squared error (MSE) and intraclass correlation coefficient (ICC), and the reproducibility of the automated segmentation of the breast regions was assessed in a test-retest step. RESULTS The three automated breast segmentation models developed in this study (simple registration, dynamic programming, and deep learning) showed strong agreement with manual segmentation of the breast region, with MSEs of 1.124, 0.693, and 0.781, and ICCs of 0.975 (95% CI, 0.869-0.991), 0.986 (95% CI, 0.967-0.996), and 0.983 (95% CI, 0.961-0.992), respectively. Regarding the test-retest results for breast volume, the dynamic programming model performed best with an MSE of 0.370 and an ICC of 0.993 (95% CI, 0.982-0.997), followed by the deep learning algorithm with an MSE of 0.741 and an ICC of 0.983 (95% CI, 0.956-0.993), and the simple registration algorithm with an MSE of 0.763 and an ICC of 0.982 (95% CI, 0.949-0.993). The reproducibility of the breast regions segmented by the three automated algorithms was higher than that of manual segmentation by different radiologists. CONCLUSION The three automated breast segmentation algorithms developed in this study generate accurate and reliable breast regions, enable highly reproducible breast-region segmentation and automated volume measurements, and provide a valuable tool for the surgical selection of appropriate prostheses. NO LEVEL ASSIGNED This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
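The MSE/ICC agreement analysis used above can be reproduced with standard formulas. A hedged sketch follows; the two-way random-effects, absolute-agreement, single-measure ICC(2,1) is one common choice, and the paper does not state which ICC form it used:

```python
import numpy as np

def volume_mse(a, b):
    """Mean squared error between two sets of volume measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(((a - b) ** 2).mean())

def icc_2_1(ratings):
    """ICC(2,1) from an (n_subjects, k_raters) matrix via the classic
    two-way ANOVA mean squares."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect rater agreement yields an ICC of 1; disagreement between raters drives it down.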
Affiliation(s)
- Muzi Chen
- Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
- Jiahua Xing
- Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 33 Badachu Road, Shijingshan District, Beijing, 100144, China
- Lingli Guo
- Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
3. Comes MC, Fanizzi A, Bove S, Boldrini L, Latorre A, Guven DC, Iacovelli S, Talienti T, Rizzo A, Zito FA, Massafra R. Monitoring Over Time of Pathological Complete Response to Neoadjuvant Chemotherapy in Breast Cancer Patients Through an Ensemble Vision Transformers-Based Model. Cancer Med 2024;13:e70482. PMID: 39692281. DOI: 10.1002/cam4.70482.
Abstract
BACKGROUND Morphological and vascular characteristics of breast cancer can change during neoadjuvant chemotherapy (NAC). Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) exams acquired pre- and mid-treatment quantitatively capture information about tumor heterogeneity and are potential early indicators of pathological complete response (pCR) to NAC in breast cancer. AIMS This study aimed to develop an ensemble deep learning model, exploiting a Vision Transformer (ViT) architecture, that merges features automatically extracted from five segmented slices of both the pre- and mid-treatment exams containing the maximum tumor area, to predict and monitor pCR to NAC. MATERIALS AND METHODS The imaging data analyzed in this study referred to a cohort of 86 breast cancer patients, randomly split into training and test sets at a ratio of 8:2, who underwent NAC and for whom pCR status was available (37.2% of patients achieved pCR). We further validated our model using a subset of 20 patients selected from the publicly available I-SPY2 trial dataset (independent test). RESULTS The performance of the proposed model was assessed using standard evaluation metrics, with promising results: an area under the curve (AUC) of 91.4%, an accuracy of 82.4%, a specificity of 80.0%, a sensitivity of 85.7%, a precision of 75.0%, an F-score of 80.0%, and a G-mean of 82.8%. On the independent test, the model achieved an AUC of 81.3%, an accuracy of 80.0%, a specificity of 76.9%, a sensitivity of 85.0%, a precision of 66.7%, an F-score of 75.0%, and a G-mean of 81.2%. DISCUSSION To the best of our knowledge, this is the first proposal to use ViTs on DCE-MRI exams to monitor pCR over time during NAC. CONCLUSION The changes in DCE-MRI between pre- and mid-treatment can affect the accuracy of pCR prediction for NAC.
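The G-mean reported above is the geometric mean of sensitivity and specificity; all the listed metrics can be derived from a binary confusion matrix, as sketched below (the counts used in the usage example are hypothetical, chosen only for illustration):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from a confusion matrix;
    the G-mean is sqrt(sensitivity * specificity)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * prec * sens / (prec + sens)
    gmean = math.sqrt(sens * spec)
    return {"sensitivity": sens, "specificity": spec, "precision": prec,
            "accuracy": acc, "f1": f1, "g_mean": gmean}
```

For example, a hypothetical test matrix of tp=6, fp=2, tn=8, fn=1 gives a precision of 0.75, an F-score of 0.80, and a G-mean of about 0.828.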
Affiliation(s)
- Maria Colomba Comes
- Laboratorio di Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Annarita Fanizzi
- Laboratorio di Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Samantha Bove
- Laboratorio di Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Luca Boldrini
- Unità Operativa Complessa di Radioterapia Oncologica, Fondazione Policlinico Universitario Agostino Gemelli I.R.C.C.S., Rome, Italy
- Agnese Latorre
- Unità Operativa Complessa di Oncologia Medica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Deniz Can Guven
- Department of Medical Oncology, Hacettepe University Cancer Institute, Ankara, Turkey
- Serena Iacovelli
- Trial Office, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Tiziana Talienti
- Unità Operativa Complessa di Oncologia Medica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Alessandro Rizzo
- Struttura Semplice Dipartimentale di Oncologia Medica per la Presa in Carico Globale del Paziente Oncologico "Don Tonino Bello", I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Francesco Alfredo Zito
- Unità Operativa Complessa di Anatomia Patologica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Raffaella Massafra
- Laboratorio di Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Bari, Italy
4. Comes MC, Fanizzi A, Bove S, Didonna V, Diotiaiuti S, Fadda F, La Forgia D, Giotta F, Latorre A, Nardone A, Palmiotti G, Ressa CM, Rinaldi L, Rizzo A, Talienti T, Tamborra P, Zito A, Lorusso V, Massafra R. Explainable 3D CNN based on baseline breast DCE-MRI to give an early prediction of pathological complete response to neoadjuvant chemotherapy. Comput Biol Med 2024;172:108132. PMID: 38508058. DOI: 10.1016/j.compbiomed.2024.108132.
Abstract
BACKGROUND So far, baseline Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has played a key role in the application of sophisticated artificial-intelligence models based on Convolutional Neural Networks (CNNs) to extract quantitative imaging information as early indicators of pathological Complete Response (pCR) in breast cancer patients treated with neoadjuvant chemotherapy (NAC). However, these models did not exploit the DCE-MRI exams in their full geometry as 3D volumes but analyzed only a few individual slices independently, thus neglecting depth information. METHOD This study aimed to develop an explainable 3D CNN that predicts pCR before the beginning of NAC by leveraging the 3D information of post-contrast baseline breast DCE-MRI exams. Specifically, for each patient, the network took as input a 3D sequence containing the tumor region, which was previously automatically identified along the DCE-MRI exam. A visual explanation of the decision-making process of the network was also provided. RESULTS To the best of our knowledge, our proposal is competitive with other models in the field that used imaging data alone, reaching a median AUC of 81.8% (95% CI [75.3%, 88.3%]), a median accuracy of 78.7% (95% CI [74.8%, 82.5%]), a median sensitivity of 69.8% (95% CI [59.6%, 79.9%]), and a median specificity of 83.3% (95% CI [82.6%, 84.0%]). The medians and CIs were computed over a 10-fold cross-validation scheme repeated for 5 rounds. CONCLUSION This proposal holds high potential to support clinicians in non-invasively pursuing or adjusting patient-centric NAC pathways at an early stage.
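The evaluation protocol above, 10-fold cross-validation repeated for 5 rounds with a median and 95% CI over folds, can be sketched as follows. A percentile interval is one plausible way to form the CI; the paper does not state its exact method, and the function names are hypothetical:

```python
import numpy as np

def repeated_kfold(n, k=10, rounds=5, seed=0):
    """Yield (train, test) index arrays for `rounds` independent k-fold splits."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        order = rng.permutation(n)
        for fold in np.array_split(order, k):
            test = np.sort(fold)
            train = np.setdiff1d(np.arange(n), test)  # already sorted
            yield train, test

def median_and_percentile_ci(scores, ci=95):
    """Median of per-fold scores with a simple percentile interval."""
    scores = np.asarray(scores, float)
    lo, hi = np.percentile(scores, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return float(np.median(scores)), float(lo), float(hi)
```

Each round reshuffles the data, so the per-fold metric distribution reflects both fold and partition variability.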
Affiliation(s)
- Maria Colomba Comes
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Annarita Fanizzi
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Samantha Bove
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Vittorio Didonna
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Sergio Diotiaiuti
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Federico Fadda
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Daniele La Forgia
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Francesco Giotta
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Agnese Latorre
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Annalisa Nardone
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Gennaro Palmiotti
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Cosmo Maurizio Ressa
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Lucia Rinaldi
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Alessandro Rizzo
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Tiziana Talienti
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Pasquale Tamborra
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Alfredo Zito
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Vito Lorusso
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
- Raffaella Massafra
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", Viale Orazio Flacco 65, 70124, Bari, Italy
5. Liu H, Wei D, Lu D, Tang X, Wang L, Zheng Y. Simultaneous alignment and surface regression using hybrid 2D-3D networks for 3D coherent layer segmentation of retinal OCT images with full and sparse annotations. Med Image Anal 2024;91:103019. PMID: 37944431. DOI: 10.1016/j.media.2023.103019.
Abstract
Layer segmentation is important for quantitative analysis of retinal optical coherence tomography (OCT). Recently, deep learning-based methods have been developed to automate this task and yield remarkable performance. However, due to the large spatial gap and potential mismatch between the B-scans of an OCT volume, all of these methods were based on 2D segmentation of individual B-scans, which may lose the continuity and diagnostic information of the retinal layers in 3D space. Besides, most of these methods required dense annotation of the OCT volumes, which is labor-intensive and expertise-demanding. This work presents a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) to obtain continuous 3D retinal layer surfaces from OCT volumes, which works well with both full and sparse annotations. The 2D features of individual B-scans are extracted by an encoder consisting of 2D convolutions. These 2D features are then used to produce the alignment displacement vectors and the layer segmentation by two 3D decoders coupled via a spatial transformer module. Two losses are proposed to exploit the natural smoothness of the retinal layers for B-scan alignment and layer segmentation, respectively, and are the key to semi-supervised learning with sparse annotation. The entire framework is trained end-to-end. To the best of our knowledge, this is the first work to attempt 3D retinal layer segmentation in volumetric OCT images based on CNNs. Experiments on a synthetic dataset and three public clinical datasets show that our framework can effectively align the B-scans for potential motion correction and achieves superior performance to state-of-the-art 2D deep learning methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity in both fully and semi-supervised settings, thus offering more clinical value than previous works.
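The smoothness losses exploited above can be illustrated with a first-order penalty on a layer surface represented as a depth map over B-scan and A-scan coordinates. This is a generic sketch of a surface-smoothness term, not the authors' exact loss:

```python
import numpy as np

def surface_smoothness_loss(depth):
    """First-order smoothness penalty on a layer surface stored as a
    depth map d[b_scan, a_scan]: penalizes squared finite differences
    both across B-scans (3D continuity) and within each B-scan."""
    dz_b = np.diff(depth, axis=0)  # differences across adjacent B-scans
    dz_a = np.diff(depth, axis=1)  # differences along each B-scan
    return float((dz_b ** 2).mean() + (dz_a ** 2).mean())
```

A perfectly flat surface incurs zero penalty, while abrupt jumps between adjacent B-scans are penalized, which encourages cross-B-scan 3D continuity.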
Affiliation(s)
- Hong Liu
- School of Informatics, Xiamen University, Xiamen 361005, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen 361005, China; Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Dong Wei
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Donghuan Lu
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Xiaoying Tang
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Liansheng Wang
- School of Informatics, Xiamen University, Xiamen 361005, China
- Yefeng Zheng
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
6. Ying J, Cattell R, Zhao T, Lei L, Jiang Z, Hussain SM, Gao Y, Chow HHS, Stopeck AT, Thompson PA, Huang C. Two fully automated data-driven 3D whole-breast segmentation strategies in MRI for MR-based breast density using image registration and U-Net with a focus on reproducibility. Vis Comput Ind Biomed Art 2022;5:25. PMID: 36219359. PMCID: PMC9554077. DOI: 10.1186/s42492-022-00121-4.
Abstract
Presence of higher breast density (BD) and its persistence over time are risk factors for breast cancer. A quantitatively accurate and highly reproducible BD measure, which relies on precise and reproducible whole-breast segmentation, is desirable. In this study, we aimed to develop a highly reproducible and accurate whole-breast segmentation algorithm for the generation of reproducible BD measures. Three datasets of volunteers from two clinical trials were included. Breast MR images were acquired on 3 T Siemens Biograph mMR, Prisma, and Skyra scanners using 3D Cartesian six-echo GRE sequences with a fat-water separation technique. Two whole-breast segmentation strategies, utilizing image registration and a 3D U-Net, were developed. Manual segmentation was also performed. A task-based analysis was carried out: a previously developed MR-based BD measure, MagDensity, was calculated and assessed using both automated and manual segmentation. The mean squared error (MSE) and intraclass correlation coefficient (ICC) of MagDensity were evaluated using the manual segmentation as reference. The test-retest reproducibility of MagDensity derived from the different breast segmentation methods was assessed using the difference between the test and retest measures (Δ2-1), the MSE, and the ICC. The results showed that MagDensity derived by the registration and deep learning segmentation methods exhibited high concordance with manual segmentation, with ICCs of 0.986 (95% CI: 0.974-0.993) and 0.983 (95% CI: 0.961-0.992), respectively. In the test-retest analysis, MagDensity derived using the registration algorithm achieved the smallest MSE of 0.370 and the highest ICC of 0.993 (95% CI: 0.982-0.997) compared with the other segmentation methods. In conclusion, the proposed registration and deep learning whole-breast segmentation methods are accurate and reliable for estimating BD. Both methods outperformed a previously developed algorithm and manual segmentation in the test-retest assessment, with the registration method exhibiting superior performance for highly reproducible BD measurements.
Affiliation(s)
- Jia Ying
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Renee Cattell
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Radiation Oncology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Tianyun Zhao
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Lan Lei
- Department of Medicine, Northside Hospital Gwinnett, Lawrenceville, GA, 30046, USA
- Program of Public Health, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Zhao Jiang
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Shahid M Hussain
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Yi Gao
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Alison T Stopeck
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Patricia A Thompson
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Medicine, Cedar Sinai Cancer, Cedars Sinai Medical Center, Los Angeles, CA, 90048, USA
- Chuan Huang
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
7. Shim S, Cester D, Ruby L, Bluethgen C, Marcon M, Berger N, Unkelbach J, Boss A. Fully automated breast segmentation on spiral breast computed tomography images. J Appl Clin Med Phys 2022;23:e13726. PMID: 35946049. PMCID: PMC9588268. DOI: 10.1002/acm2.13726.
Abstract
Introduction The quantification of the amount of glandular tissue and breast density is important for assessing breast cancer risk. Novel photon-counting breast computed tomography (CT) technology has the potential to quantify both. Accurate analysis requires a dedicated method to segment the breast components: the adipose and glandular tissue, skin, pectoralis muscle, skinfold section, rib, and implant. We propose a fully automated breast segmentation method for breast CT images. Methods The framework consists of four parts: (1) investigate, (2) segment the components excluding adipose and glandular tissue, (3) assess the breast density, and (4) iteratively segment the glandular tissue according to the estimated density. Adapted seeded-watershed and region-growing algorithms were developed specifically for breast CT images and optimized on 68 breast images. The segmentation performance was assessed qualitatively (five-point Likert scale) and quantitatively (Dice similarity coefficient [DSC] and difference coefficient [DC]) against readings by experienced radiologists. Results The performance evaluation of each component and the overall segmentation on 17 breast CT images yielded DSCs ranging from 0.90 to 0.97 and DCs from 0.01 to 0.08. The readers gave ratings of 4.5-4.8 (5 being the highest score) with excellent inter-reader agreement. The breast density varied by 3.7%-7.1% when mis-segmented muscle or skin was included. Conclusion The automatic segmentation results coincided with the human experts' readings. Accurate segmentation is important to avoid significant bias in breast density analysis. Our method enables accurate quantification of breast density and of the amount of glandular tissue, which is directly related to breast cancer risk.
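The region-growing step mentioned above can be illustrated with a minimal intensity-based implementation on a 2D slice. This is a generic textbook version for illustration, not the adapted algorithm developed in the paper:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over 4-connected neighbors whose
    intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Seeding inside a homogeneous structure recovers exactly that structure, which is the behavior such segmentation pipelines rely on.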
Affiliation(s)
- Sojin Shim
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Davide Cester
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Lisa Ruby
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Christian Bluethgen
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Magda Marcon
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Nicole Berger
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, Switzerland
- Andreas Boss
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
8. Joint Transformer and Multi-scale CNN for DCE-MRI Breast Cancer Segmentation. Soft Comput 2022. DOI: 10.1007/s00500-022-07235-0.
Abstract
Automatic segmentation of breast cancer lesions in dynamic contrast-enhanced magnetic resonance imaging is challenged by low accuracy in delineating the infiltration area, variable structures and shapes, large intensity heterogeneity, and low boundary contrast. This study constructs a two-stage breast cancer image segmentation framework and proposes a novel breast cancer lesion segmentation model (TR-IMUnet). The baseline U-Net model first roughly delineates the breast area in the acquired images and eliminates the influence of unrelated tissues (chest muscle, fat, and heart) on breast tumor segmentation. Based on the extracted region of interest, the rectified linear unit (ReLU) function in the encoding-decoding structure of the model is replaced by an improved ReLU function that retains and adjusts the data dynamically according to the input information. The segmentation accuracy for breast cancer lesions is improved by embedding a multi-scale fusion block and a transformer module in the encoding path of the model, thereby obtaining multi-scale and global attention information. Experimental results showed that the breast tumor segmentation indexes Dice coefficient (Dice), Intersection over Union (IoU), Sensitivity (SEN), and Positive Predictive Value (PPV) increased by 4.27%, 5.21%, 3.37%, and 3.68%, respectively, relative to the U-Net reference model. The proposed model improves the segmentation of breast cancer lesions and reduces mis-segmentation of small areas and calcifications.
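The overlap indexes reported above (Dice, IoU, SEN, PPV) are computed from binary segmentation masks; a straightforward sketch:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, IoU, sensitivity (SEN), and positive predictive value (PPV)
    for binary segmentation masks of the same shape."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    return {
        "dice": 2.0 * tp / (pred.sum() + gt.sum()),
        "iou": tp / np.logical_or(pred, gt).sum(),
        "sen": tp / gt.sum(),
        "ppv": tp / pred.sum(),
    }
```

Note that Dice and IoU are monotonically related (Dice = 2 * IoU / (1 + IoU)), so improvements in one imply improvements in the other.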
9. Wei D, Jahani N, Cohen E, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Fully automatic quantification of fibroglandular tissue and background parenchymal enhancement with accurate implementation for axial and sagittal breast MRI protocols. Med Phys 2020;48:238-252. PMID: 33150617. DOI: 10.1002/mp.14581.
Abstract
PURPOSE To propose and evaluate a fully automated technique for quantification of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in breast MRI. METHODS We propose a fully automated method in which, after preprocessing, FGT is segmented in T1-weighted, non-fat-saturated MRI. By incorporating an anatomy-driven prior probability for FGT and texture descriptors robust to intensity variations, our method effectively addresses major image processing challenges, including wide variations in breast anatomy and FGT appearance among individuals. Our framework then propagates this segmentation to dynamic contrast-enhanced (DCE)-MRI to quantify BPE within the segmented FGT regions. Axial and sagittal image data from 40 cancer-unaffected women were used to evaluate our proposed method against a manually annotated reference standard. RESULTS High spatial correspondence was observed between the automatic and manual FGT segmentations (mean Dice similarity coefficient 81.14%). The FGT and BPE quantifications (denoted FGT% and BPE%) indicated high correlation (Pearson's r = 0.99 for both) between automatic and manual segmentations. Furthermore, the differences between the FGT% and BPE% quantified using automatic and manual segmentations were low (mean differences: -0.66 ± 2.91% for FGT% and -0.17 ± 1.03% for BPE%). When correlated with qualitative clinical BI-RADS ratings, the correlation coefficient for FGT% was still high (Spearman's ρ = 0.92), whereas that for BPE% was lower (ρ = 0.65). Our proposed approach also performed significantly better than a previously validated method for sagittal breast MRI. CONCLUSIONS Our method demonstrated accurate, fully automated quantification of FGT and BPE in both sagittal and axial breast MRI. Our results also highlight the complexity of BPE assessment, showing relatively low correlation between segmentation-based and clinical ratings.
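The FGT% and BPE% quantifications above can be illustrated as mask-based fractions. The relative-enhancement definition and the threshold below are assumptions for illustration, not the paper's exact criteria:

```python
import numpy as np

def fgt_bpe_percent(breast_mask, fgt_mask, pre, post, thresh=0.2):
    """FGT% sketched as the fraction of breast voxels labeled FGT; BPE%
    as the fraction of FGT voxels whose relative enhancement
    (post - pre) / pre exceeds a hypothetical threshold."""
    fgt_pct = 100.0 * fgt_mask.sum() / breast_mask.sum()
    enh = (post - pre) / np.maximum(pre, 1e-6)  # guard against divide-by-zero
    bpe_pct = 100.0 * np.logical_and(enh > thresh, fgt_mask).sum() / fgt_mask.sum()
    return float(fgt_pct), float(bpe_pct)
```

Because BPE% is measured only inside the segmented FGT, errors in FGT segmentation propagate directly into the BPE estimate, consistent with the paper's observation that BPE assessment is harder.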
Affiliation(s)
- Dong Wei
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Tencent Jarvis Lab, Shenzhen, Guangdong, 518057, China
- Nariman Jahani
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Eric Cohen
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Susan Weinstein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Meng-Kang Hsieh
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Lauren Pantalone
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
10. Automated mammogram breast cancer detection using the optimized combination of convolutional and recurrent neural network. Evol Intell 2020. DOI: 10.1007/s12065-020-00403-x.
11. Wei D, Weinstein S, Hsieh MK, Pantalone L, Kontos D. Three-Dimensional Whole Breast Segmentation in Sagittal and Axial Breast MRI With Dense Depth Field Modeling and Localized Self-Adaptation for Chest-Wall Line Detection. IEEE Trans Biomed Eng 2019;66:1567-1579. PMID: 30334748. PMCID: PMC6684022. DOI: 10.1109/tbme.2018.2875955.
Abstract
OBJECTIVE Whole breast segmentation is an essential task in the quantitative analysis of breast MRI for cancer risk assessment. It is challenging mainly because the chest-wall line (CWL) can be very difficult to locate, owing to its spatially varying appearance (caused by both anatomy and imaging artifacts) and to neighboring distracting structures. This paper proposes an automatic three-dimensional (3D) whole-breast segmentation method, termed DeepSeA, for breast MRI. METHODS DeepSeA distinguishes itself from previous methods in three aspects. First, it reformulates the challenging problem of CWL localization as an equivalent problem that optimizes a smooth depth field, thereby fully utilizing the CWL's 3D continuity. Second, it employs a localized self-adapting algorithm to adjust to the CWL's spatial variation. Third, it applies equally well to breast MRI data in both sagittal and axial orientations without training. RESULTS A representative set of 99 breast MRI scans with varying imaging protocols was used for evaluation. Experimental results against an expert-outlined reference standard show that DeepSeA segments breasts accurately: the average Dice similarity coefficient, sensitivity, specificity, and CWL deviation error are 96.04%, 97.27%, 98.77%, and 1.63 mm, respectively. In addition, the configuration of DeepSeA is generalized based on the experimental findings for application to broad prospective data. CONCLUSION A fully automatic method, DeepSeA, for whole-breast segmentation in sagittal and axial breast MRI is reported. SIGNIFICANCE DeepSeA can facilitate cancer risk assessment with breast MRI.
Affiliation(s)
- Dong Wei
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Susan Weinstein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Meng-Kang Hsieh
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Lauren Pantalone
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Despina Kontos
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA