1. Guo Y, Li N, Song C, Yang J, Quan Y, Zhang H. Artificial intelligence-based automated breast ultrasound radiomics for breast tumor diagnosis and treatment: a narrative review. Front Oncol 2025;15:1578991. PMID: 40406239; PMCID: PMC12095238; DOI: 10.3389/fonc.2025.1578991.
Abstract
Breast cancer (BC) is the most common malignant tumor among women worldwide, posing a substantial threat to their health and overall quality of life. For early-stage BC, timely screening, accurate diagnosis, and personalized treatment strategies are therefore crucial for improving patient survival. Automated breast ultrasound (ABUS) addresses the limitations of traditional handheld ultrasound (HHUS), such as operator dependency and inter-observer variability, by providing a more comprehensive and standardized approach to BC detection and diagnosis. Radiomics, an emerging field, extracts high-dimensional quantitative features from medical imaging data and uses them to construct predictive models for disease diagnosis, prognosis, and treatment evaluation. In recent years, the integration of artificial intelligence (AI) with radiomics, through machine learning (ML) and deep learning (DL) algorithms, has greatly improved the analysis and extraction of meaningful features from large, complex radiomic datasets, and AI-based ABUS radiomics has shown significant potential in the diagnosis and therapeutic evaluation of BC. However, despite the notable performance and application potential of ABUS-based ML and DL models, the inherent variability of the analyzed data means these models require further evaluation to ensure their reliability in clinical applications.
Affiliation(s)
- Yinglin Guo
- Faculty of Life Science and Technology & The Affiliated Anning First People’s Hospital, Kunming University of Science and Technology, Kunming, China
- Ning Li
- Department of Radiology, Faculty of Life Science and Technology & The Affiliated Anning First People's Hospital, Kunming University of Science and Technology, Kunming, China
- Chonghui Song
- Faculty of Life Science and Technology & The Affiliated Anning First People’s Hospital, Kunming University of Science and Technology, Kunming, China
- Juan Yang
- Faculty of Life Science and Technology & The Affiliated Anning First People’s Hospital, Kunming University of Science and Technology, Kunming, China
- Yinglan Quan
- Faculty of Life Science and Technology & The Affiliated Anning First People’s Hospital, Kunming University of Science and Technology, Kunming, China
- Hongjiang Zhang
- Department of Radiology, Faculty of Life Science and Technology & The Affiliated Anning First People's Hospital, Kunming University of Science and Technology, Kunming, China
2. Yan L, Li Q, Fu K, Zhou X, Zhang K. Progress in the Application of Artificial Intelligence in Ultrasound-Assisted Medical Diagnosis. Bioengineering (Basel) 2025;12:288. PMID: 40150752; PMCID: PMC11939760; DOI: 10.3390/bioengineering12030288.
Abstract
The integration of artificial intelligence (AI) into ultrasound medicine has revolutionized medical imaging, enhancing diagnostic accuracy and clinical workflows. This review focuses on the applications, challenges, and future directions of AI technologies, particularly machine learning (ML) and its subset, deep learning (DL), in ultrasound diagnostics. By leveraging advanced algorithms such as convolutional neural networks (CNNs), AI has significantly improved image acquisition, quality assessment, and objective disease diagnosis. AI-driven solutions now facilitate automated image analysis, intelligent diagnostic assistance, and medical education, enabling precise lesion detection across various organs while reducing physician workload. AI's error detection capabilities further enhance diagnostic accuracy. Looking ahead, the integration of AI with ultrasound is expected to deepen, promoting trends in standardization, personalized treatment, and intelligent healthcare, particularly in underserved areas. Despite its potential, comprehensive assessments of AI's diagnostic accuracy and ethical implications remain limited, necessitating rigorous evaluations to ensure effectiveness in clinical practice. This review provides a systematic evaluation of AI technologies in ultrasound medicine, highlighting their transformative potential to improve global healthcare outcomes.
Affiliation(s)
- Li Yan
- Institute of Medical Research, Northwestern Polytechnical University, Xi’an 710072, China; (L.Y.); (K.F.)
- Qing Li
- Ultrasound Diagnosis & Treatment Center, Xi’an International Medical Center Hospital, Xi’an 710100, China
- Kang Fu
- Institute of Medical Research, Northwestern Polytechnical University, Xi’an 710072, China; (L.Y.); (K.F.)
- Xiaodong Zhou
- Ultrasound Diagnosis & Treatment Center, Xi’an International Medical Center Hospital, Xi’an 710100, China
- Kai Zhang
- Department of Dermatology and Aesthetic Plastic Surgery, Xi’an No. 3 Hospital, The Affiliated Hospital of Northwest University, Xi’an 718000, China
3. Li L, Niu Y, Tian F, Huang B. An efficient deep learning strategy for accurate and automated detection of breast tumors in ultrasound image datasets. Front Oncol 2025;14:1461542. PMID: 40098633; PMCID: PMC11911202; DOI: 10.3389/fonc.2024.1461542.
Abstract
Background: Breast cancer ranks as one of the leading malignant tumors among women worldwide in terms of incidence and mortality. Ultrasound examination is a critical method for breast cancer screening and diagnosis in China. However, conventional breast ultrasound examinations are time-consuming and labor-intensive, necessitating the development of automated and efficient detection models. Methods: We developed a novel approach based on an improved deep learning model for the intelligent auxiliary diagnosis of breast tumors. Combining an optimized U2NET-Lite model with the efficient DeepCardinal-50 model, this method demonstrates superior accuracy and efficiency in the precise segmentation and classification of breast ultrasound images compared to traditional deep learning models such as ResNet and AlexNet. Results: Our proposed model demonstrated exceptional performance on experimental test sets. For segmentation, the U2NET-Lite model processed breast cancer images with an accuracy of 0.9702, a recall of 0.7961, and an IoU of 0.7063. In classification, the DeepCardinal-50 model achieved higher accuracy and AUC values than other models. Specifically, ResNet-50 achieved accuracies of 0.78 for benign, 0.67 for malignant, and 0.73 for normal cases, while DeepCardinal-50 achieved 0.76, 0.63, and 0.90, respectively. These results highlight our model's capability in breast tumor identification and classification. Conclusion: The automatic detection of benign and malignant breast tumors using deep learning can rapidly and accurately identify breast tumor types at an early stage, which is crucial for the early diagnosis and treatment of malignant breast tumors.
Affiliation(s)
- Luyao Li
- Department of Ultrasound, Zhejiang Hospital, Hangzhou, China
- Yupeng Niu
- College of Information Engineering, Sichuan Agricultural University, Ya'an, China
- Fa Tian
- College of Information Engineering, Sichuan Agricultural University, Ya'an, China
- Bin Huang
- Department of Ultrasound, Zhejiang Hospital, Hangzhou, China
4. Jiang X, Chen C, Yao J, Wang L, Yang C, Li W, Ou D, Jin Z, Liu Y, Peng C, Wang Y, Xu D. A nomogram for diagnosis of BI-RADS 4 breast nodules based on three-dimensional volume ultrasound. BMC Med Imaging 2025;25:48. PMID: 39953395; PMCID: PMC11829536; DOI: 10.1186/s12880-025-01580-w.
Abstract
OBJECTIVES: Breast nodules classified as category 4 under the Breast Imaging Reporting and Data System (BI-RADS) show considerable variability in malignancy risk, posing challenges in clinical diagnosis. This study investigates whether a nomogram prediction model incorporating the automated breast ultrasound system (ABUS) can improve the accuracy of differentiating benign from malignant BI-RADS 4 breast nodules. METHODS: In this retrospective study, data were collected for 257 BI-RADS 4 breast nodules that underwent ABUS examination and had pathology results available from January 2019 to August 2022; the nodules were divided into a benign group (188 cases) and a malignant group (69 cases). Ultrasound imaging features were recorded, logistic regression analysis was used to screen clinical and ultrasound characteristics, and a nomogram prediction model was established from the selected variables. RESULTS: Age, distance between nodule and nipple, calcification, and the C-plane convergence sign were independent risk factors for differentiating benign from malignant breast nodules (all P < 0.05). A nomogram model was established based on these variables. The area under the curve (AUC) values for the nomogram model, age, distance between nodule and nipple, calcification, and C-plane convergence sign were 0.86, 0.735, 0.645, 0.697, and 0.685, respectively; the AUC of the model was thus significantly higher than that of any single variable. CONCLUSIONS: A nomogram based on the clinical and ABUS imaging features can improve the accuracy of diagnosing benign and malignant BI-RADS 4 nodules. It can serve as a relatively accurate predictive tool for sonographers and clinicians and is therefore clinically useful.
ADVANCES IN KNOWLEDGE STATEMENT: We retrospectively analyzed the clinical and ultrasound characteristics of ABUS BI-RADS 4 nodules and established a nomogram model to improve the efficiency of ABUS readers in diagnosing BI-RADS 4 nodules.
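The central comparison in this abstract, the nomogram's AUC against each single-variable AUC, rests on the rank interpretation of the area under the ROC curve: the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one. A minimal illustrative sketch follows; the scores are invented toy numbers, not the study's data:

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive
    (malignant) case scores higher than a randomly chosen negative
    (benign) case; ties count as 0.5."""
    pairs = list(product(scores_pos, scores_neg))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Toy scores from a hypothetical nomogram: higher = more suspicious.
malignant = [0.9, 0.8, 0.7, 0.6]
benign = [0.5, 0.4, 0.65, 0.2]
print(auc(malignant, benign))  # 0.9375
```

An AUC of 0.86, as reported for the nomogram, would mean the model ranks a malignant nodule above a benign one 86% of the time.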
Affiliation(s)
- Xianping Jiang
- Department of Ultrasound, Shengzhou People's Hospital (Shengzhou Branch of the First Affiliated Hospital of Zhejiang University School of Medicine, the Shengzhou Hospital of Shaoxing University), Shengzhou, 312400, China
- Chen Chen
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Jincao Yao
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Liping Wang
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Chen Yang
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Wei Li
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Di Ou
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Zhiyan Jin
- Postgraduate training base Alliance of Wenzhou Medical University, Hangzhou, 310022, China
- Yuanzhen Liu
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Chanjuan Peng
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China
- Yifan Wang
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China.
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China.
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China.
- Dong Xu
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, No.1 East Banshan Road, Gongshu District, Hangzhou, Zhejiang, 310022, China.
- Center of Intelligent Diagnosis and Therapy (Taizhou), Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Taizhou, 317502, China.
- Wenling Institute of Big Data and Artificial Intelligence in Medicine, Taizhou, 317502, China.
5. Liu F, Li G, Wang J. Advanced analytical methods for multi-spectral transmission imaging optimization: enhancing breast tissue heterogeneity detection and tumor screening with hybrid image processing and deep learning. Anal Methods 2024;17:104-123. PMID: 39569814; DOI: 10.1039/d4ay01755b.
Abstract
Light undergoes significant absorption and scattering during transmission through biological tissues, which makes it difficult to identify heterogeneities in multi-spectral transmission images (MTI). This paper introduces a fusion of techniques encompassing the spatial pyramid matching model (SPM), modulation and demodulation (M_D), and frame accumulation (FA). These techniques not only improve image quality but also increase the precision of heterogeneity classification in MTI within deep learning network models (DLNM). First, experiments were designed to capture MTI of phantoms. The images were then preprocessed with different combinations of SPM, M_D, and FA. Finally, multi-spectral fusion pseudo-color images derived from U-Net semantic segmentation were fed into VGG16/19 and ResNet50/101 networks for heterogeneity classification. Different combinations of SPM, M_D, and FA significantly enhanced image quality, facilitating the extraction of heterogeneous feature information from multi-spectral images. Compared with the classification accuracy achieved on the original images in the VGG and ResNet network models, all preprocessed images yielded improved heterogeneity classification accuracy. Following scatter correction, images processed with 3.5 Hz modulation-demodulation combined with frame accumulation (M_D-FA) attained the highest classification accuracy in the VGG19 and ResNet101 models, achieving 95.47% and 98.47%, respectively. In conclusion, combinations of SPM, M_D, and FA techniques both enhance image quality and further improve the accuracy of DLNM in heterogeneity classification, which should promote the clinical application of the MTI technique in breast tumor screening.
Affiliation(s)
- Fulong Liu
- Xuzhou Medical University, School of Medical Information and Engineering, Xuzhou, Jiangsu, 221000, China
- Gang Li
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
- Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
- Junqi Wang
- Xinyuan Middle School, Xuzhou, Jiangsu, 221000, China.
6. Liu S, Wei G, Fan Y, Chen L, Zhang Z. Multimodal registration network with multi-scale feature-crossing. Int J Comput Assist Radiol Surg 2024;19:2269-2278. PMID: 39285109; DOI: 10.1007/s11548-024-03258-0.
Abstract
PURPOSE: The complementary medical imaging modalities of ultrasound (US) and magnetic resonance imaging (MRI) provide critical information for prostate intervention and cancer treatment. MRI-US image fusion is therefore often required during prostate examination to provide contrast-enhanced TRUS, and image registration is a key step in multimodal image fusion. METHODS: We propose a novel multi-scale feature-crossing network for the prostate MRI-US image registration task. We designed a feature-crossing module to enhance information flow in the hidden layers by integrating intermediate features between adjacent scales. Additionally, an attention block using three-dimensional convolution exchanges information between channels, improving the correlation between features of the different modalities. We used 100 cases randomly selected from The Cancer Imaging Archive (TCIA) and applied fivefold cross-validation: the dataset was divided into five subsets, four used for training and one for testing, with the process repeated five times so that each subset served as the test set once. RESULTS: The cross-validation trials yielded a median target registration error of 2.20 mm on landmark centroids and a median Dice of 0.87 on prostate glands, both better than the baseline model. In addition, the standard deviation of the Dice similarity coefficient was 0.06, suggesting that the model is stable. CONCLUSION: The proposed multi-scale feature-crossing network improves registration accuracy. After registration, the MRI and TRUS images were more similar in structure and morphology, and the location and morphology of the cancer were reflected more accurately in the images.
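The fivefold cross-validation protocol described above (five disjoint subsets, each serving once as the test set) is a standard generic procedure. The splitting below is an illustrative sketch, not the authors' code:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split indices 0..n-1 into k disjoint folds; yield (train, test)
    pairs so that each fold serves exactly once as the test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 100 cases, fivefold: every case appears in exactly one test fold.
splits = list(kfold_indices(100, k=5))
seen = sorted(j for _, test in splits for j in test)
print(len(splits), seen == list(range(100)))  # 5 True
```

Reporting the median and standard deviation over the five test folds, as the abstract does, summarizes performance across all cases without ever testing on training data.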
Affiliation(s)
- Shuting Liu
- Business School, University of Shanghai for Science and Technology, Jungong Road, Shanghai, 200093, China
- Guoliang Wei
- Business School, University of Shanghai for Science and Technology, Jungong Road, Shanghai, 200093, China.
- Yi Fan
- Puncture Intelligent Medical Technology Co Ltd, Xinzhuan Road, Shanghai, 201600, China
- Lei Chen
- Shanghai Sixth People's Hospital, Yishan Road, Shanghai, 200233, China
- Zhaodong Zhang
- Puncture Intelligent Medical Technology Co Ltd, Xinzhuan Road, Shanghai, 201600, China
7. Anari S, de Oliveira GG, Ranjbarzadeh R, Alves AM, Vaz GC, Bendechache M. EfficientUNetViT: Efficient Breast Tumor Segmentation Utilizing UNet Architecture and Pretrained Vision Transformer. Bioengineering (Basel) 2024;11:945. PMID: 39329687; PMCID: PMC11429406; DOI: 10.3390/bioengineering11090945.
Abstract
This study introduces a sophisticated neural network architecture for segmenting breast tumors, combining a pretrained Vision Transformer (ViT) with a UNet framework. The UNet architecture, commonly employed for biomedical image segmentation, is further enhanced with depthwise separable convolutional blocks to decrease computational complexity and parameter count, yielding better efficiency and less overfitting. The ViT, renowned for its robust feature extraction via self-attention, efficiently captures the global context within images, surpassing conventional convolutional networks. By using a pretrained ViT as the encoder in our UNet model, we take advantage of the rich feature representations it acquired from large datasets, markedly improving the model's ability to generalize and train efficiently. The proposed model delivers exceptional performance in segmenting breast tumors from medical images, highlighting the advantages of integrating transformer-based encoders with efficient UNet topologies. This hybrid methodology underscores the capabilities of transformers in medical image processing and establishes a new standard for accuracy and efficiency in tumor segmentation tasks.
Affiliation(s)
- Shokofeh Anari
- Department of Accounting, Economic and Financial Sciences, Islamic Azad University, South Tehran Branch, Tehran 1584743311, Iran
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, D09 V209 Dublin, Ireland
- Gabriel Caumo Vaz
- School of Electrical and Computer Engineering, State University of Campinas, Campinas 13083-852, Brazil
- Malika Bendechache
- ADAPT Research Centre, School of Computer Science, University of Galway, H91 TK33 Galway, Ireland
8. Wang L, Wang L, Kuai Z, Tang L, Ou Y, Wu M, Shi T, Ye C, Zhu Y. Progressive Dual Priori Network for Generalized Breast Tumor Segmentation. IEEE J Biomed Health Inform 2024;28:5459-5472. PMID: 38843066; DOI: 10.1109/jbhi.2024.3410274.
Abstract
To improve the generalization ability of breast tumor segmentation models, as well as segmentation performance for breast tumors that are small, low-contrast, or irregularly shaped, we propose a progressive dual priori network (PDPNet) to segment breast tumors from dynamic contrast-enhanced magnetic resonance images (DCE-MRI) acquired at different centers. PDPNet first crops tumor regions with a coarse-segmentation-based localization module; the breast tumor mask is then progressively refined using weak semantic priors and cross-scale correlation prior knowledge. To validate the effectiveness of PDPNet, we compared it with several state-of-the-art methods on multi-center datasets. The results showed that, compared with the second-best method, the DSC and HD95 of PDPNet improved by at least 5.13% and 7.58%, respectively, on multi-center test sets. In addition, ablations demonstrated that the proposed localization module decreases the influence of normal tissues and thereby improves the generalization ability of the model. The weak semantic priors focus attention on tumor regions to avoid missing small and low-contrast tumors, while the cross-scale correlation priors promote shape awareness for irregular tumors. Integrating them in a unified framework thus improved multi-center breast tumor segmentation performance.
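The DSC reported above is the Dice similarity coefficient, a standard overlap measure between a predicted and a ground-truth segmentation mask. A minimal sketch on toy flattened binary masks (not the paper's implementation) looks like:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2*|A^B| / (|A| + |B|) on
    flattened binary masks; 1.0 means perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]  # predicted tumor pixels
truth = [0, 1, 1, 0, 0, 0]  # ground-truth tumor pixels
print(dice(pred, truth))  # 0.8
```

HD95, the other metric quoted, is different in kind: it is the 95th percentile of boundary-to-boundary distances, so it penalizes outlying contour errors that Dice, an area measure, can hide.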
9. Barekatrezaei S, Kozegar E, Salamati M, Soryani M. Mass detection in automated three dimensional breast ultrasound using cascaded convolutional neural networks. Phys Med 2024;124:103433. PMID: 39002423; DOI: 10.1016/j.ejmp.2024.103433.
Abstract
PURPOSE: Early detection of breast cancer significantly reduces its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced by this imaging system comprises many slices, and the radiologist must review all of them to find a mass, a time-consuming task with a high probability of error. Many computer-aided detection (CADe) systems have therefore been developed to assist radiologists. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images. METHODS: The proposed system comprises two cascaded convolutional neural networks: the first aims for the highest possible sensitivity, and the second reduces false positives while maintaining high sensitivity. Both networks use an improved version of the 3-D U-Net architecture in which two types of modified Inception modules are used in the encoder. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps. RESULTS: The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients with 55 masses, achieving a sensitivity of 91.48% with a mean of 8.85 false positives per patient. CONCLUSIONS: The proposed mass detection system is fully automatic, requiring no user interaction. The results indicate that its sensitivity and mean false positives per patient outperform competing techniques.
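The two evaluation figures quoted above, sensitivity and mean false positives per patient, are simple ratios over the test set. The sketch below uses invented counts chosen only to illustrate the arithmetic; it is not the paper's evaluation code and does not reproduce its exact numbers:

```python
def detection_summary(true_masses, detected_true, false_positives, n_patients):
    """Per-dataset sensitivity (detected masses / all masses) and
    mean false positives per patient, the two figures a CADe
    system is typically scored on."""
    sensitivity = detected_true / true_masses
    fp_per_patient = false_positives / n_patients
    return sensitivity, fp_per_patient

# Hypothetical counts in the spirit of the reported evaluation:
sens, fp = detection_summary(true_masses=55, detected_true=50,
                             false_positives=381, n_patients=43)
print(round(sens * 100, 2), round(fp, 2))  # 90.91 8.86
```

Plotting sensitivity against false positives per patient while sweeping the detector's confidence threshold gives the FROC curve usually used to compare CADe systems.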
Affiliation(s)
- Sepideh Barekatrezaei
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.
- Ehsan Kozegar
- Department of Computer Engineering and Engineering Sciences, Faculty of Technology and Engineering, University of Guilan, Rudsar-Vajargah, Guilan, Iran.
- Masoumeh Salamati
- Department of Reproductive Imaging, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran.
- Mohsen Soryani
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.
10. Yangue E, Li Y, Ranjan A, Liu C. An Adaptive Image Segmentation Approach for Tumor Region Identification in Ultrasound Images. Annu Int Conf IEEE Eng Med Biol Soc 2024;2024:1-6. PMID: 40031473; DOI: 10.1109/embc53108.2024.10782614.
Abstract
Identifying the tumor region, i.e., the region of interest (ROI), in medical imaging plays a critical role in image-guided drug delivery (IGDD). Recent cutting-edge studies have demonstrated the great potential of ultrasound images in IGDD. However, interference poses significant challenges for automatically identifying the ROI in ultrasound images, as state-of-the-art methods usually cannot handle such high levels of interference. The objective of this work is therefore to develop an ultrasound-oriented image segmentation method for accurate and robust ROI identification. To achieve this goal, we propose a novel adaptive approach, termed B-CLEAR, built on an efficient collaboration among gradient-based Boundary detection, feature-based Center Locating, and an Edge-Assisted Region growing algorithm. The capability of the new method is validated on a real-world ultrasound image dataset collected from colon tumor treatment experiments. Comparison with conventional segmentation algorithms demonstrates the superior performance of the proposed approach for ROI identification in ultrasound images.
11. Wang S, Sun M, Sun J, Wang Q, Wang G, Wang X, Meng X, Wang Z, Yu H. Advancing musculoskeletal tumor diagnosis: Automated segmentation and predictive classification using deep learning and radiomics. Comput Biol Med 2024;175:108502. PMID: 38678943; DOI: 10.1016/j.compbiomed.2024.108502.
Abstract
OBJECTIVES Musculoskeletal (MSK) tumors, given their high mortality rate and heterogeneity, necessitate precise examination and diagnosis to guide clinical treatment effectively. Magnetic resonance imaging (MRI) is pivotal in detecting MSK tumors, as it offers exceptional image contrast between bone and soft tissue. This study aims to enhance the speed of detection and the diagnostic accuracy of MSK tumors through automated segmentation and grading utilizing MRI. MATERIALS AND METHODS The research included 170 patients (mean age, 58 years ±12 [standard deviation]; 84 men) with MSK lesions, who underwent MRI scans from April 2021 to May 2023. We proposed a deep learning (DL) segmentation model, MSAPN, based on multi-scale attention and pixel-level reconstruction, and compared it with existing algorithms. Radiomic features were then extracted from the MSAPN-segmented lesions to classify tumors as benign or malignant. RESULTS Compared to the most advanced segmentation algorithms, MSAPN demonstrates better performance. The Dice similarity coefficients (DSC) are 0.871 and 0.815 in the testing set and independent validation set, respectively. The radiomics model for classifying benign and malignant lesions achieves an accuracy of 0.890. Moreover, there is no statistically significant difference between the radiomics models based on manual segmentation and MSAPN segmentation. CONCLUSION This research contributes to the advancement of MSK tumor diagnosis through automated segmentation and predictive classification. The integration of DL algorithms and radiomics shows promising results, and the visualization analysis of feature maps enhances clinical interpretability.
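The DSC values reported here are the standard overlap metric used throughout this list of studies. For reference, a minimal NumPy implementation (a generic sketch, not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For example, a prediction covering two pixels where the ground truth covers one of them yields DSC = 2·1/(2+1) = 2/3.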
Affiliation(s)
- Shuo Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, 300072, China.
- Man Sun
- Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China.
- Jinglai Sun
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China.
- Qingsong Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China.
- Guangpu Wang
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China.
- Xiaolin Wang
- The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China.
- Xianghong Meng
- Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China.
- Zhi Wang
- Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China.
- Hui Yu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, 300072, China; The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China.
|
12
|
Li Y, Ren Y, Cheng Z, Sun J, Pan P, Chen H. Automatic breast ultrasound (ABUS) tumor segmentation based on global and local feature fusion. Phys Med Biol 2024; 69:115039. [PMID: 38759673 DOI: 10.1088/1361-6560/ad4d53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2024] [Accepted: 05/17/2024] [Indexed: 05/19/2024]
Abstract
Accurate segmentation of tumor regions in automated breast ultrasound (ABUS) images is of paramount importance in computer-aided diagnosis systems. However, the inherent diversity of tumors and imaging interference pose great challenges to ABUS tumor segmentation. In this paper, we propose a global and local feature interaction model combined with graph fusion (GLGM) for 3D ABUS tumor segmentation. In GLGM, we construct a dual-branch encoder-decoder in which both local and global features can be extracted. In addition, a global and local feature fusion module is designed, which employs the deepest semantic interaction to facilitate information exchange between local and global features. To improve segmentation performance for small tumors, a graph convolution-based shallow feature fusion module is also designed. It exploits shallow features to enhance the feature expression of small tumors in both the local and global domains. The proposed method is evaluated on a private ABUS dataset and a public ABUS dataset. In the private ABUS dataset, small tumors (volume smaller than 1 cm3) account for over 50% of the entire dataset. Experimental results show that the proposed GLGM model outperforms several state-of-the-art segmentation models in 3D ABUS tumor segmentation, particularly in segmenting small tumors.
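The "small tumor" stratification above (volume under 1 cm³) is a simple function of the binary mask and the scan's voxel spacing. A minimal sketch of how such a threshold might be computed (an illustration, not the authors' pipeline):

```python
import numpy as np

def tumor_volume_cm3(mask, spacing_mm):
    """Physical volume of a binary 3D tumor mask given voxel spacing
    (dz, dy, dx) in millimetres; 1 cm^3 = 1000 mm^3."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def is_small_tumor(mask, spacing_mm, threshold_cm3=1.0):
    """Flag tumors below the volume threshold used to define the
    small-tumor subgroup."""
    return tumor_volume_cm3(mask, spacing_mm) < threshold_cm3
```

With isotropic 1 mm spacing, a mask of 1000 voxels is exactly 1 cm³ and so falls just outside the small-tumor subgroup.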
Affiliation(s)
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Yihan Ren
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Zhanyi Cheng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Jia Sun
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Pan Pan
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
|
13
|
Chen Q, Zhang J, Meng R, Zhou L, Li Z, Feng Q, Shen D. Modality-Specific Information Disentanglement From Multi-Parametric MRI for Breast Tumor Segmentation and Computer-Aided Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1958-1971. [PMID: 38206779 DOI: 10.1109/tmi.2024.3352648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2024]
Abstract
Breast cancer is becoming a significant global health challenge, with millions of fatalities annually. Magnetic Resonance Imaging (MRI) can provide various sequences for characterizing tumor morphology and internal patterns, and has become an effective tool for the detection and diagnosis of breast tumors. However, previous deep-learning based tumor segmentation methods using multi-parametric MRI still have limitations in exploring inter-modality information and focusing on task-informative modalities. To address these shortcomings, we propose a Modality-Specific Information Disentanglement (MoSID) framework to extract both inter- and intra-modality attention maps as prior knowledge for guiding tumor segmentation. Specifically, by disentangling modality-specific information, the MoSID framework provides complementary clues for the segmentation task, generating modality-specific attention maps to guide modality selection and inter-modality evaluation. Our experiments on two 3D breast datasets and one 2D prostate dataset demonstrate that the MoSID framework outperforms other state-of-the-art multi-modality segmentation methods, even in the cases of missing modalities. Based on the segmented lesions, we further train a classifier to predict the patients' response to radiotherapy. The prediction accuracy is comparable to that achieved using manually segmented tumors for treatment outcome prediction, indicating the robustness and effectiveness of the proposed segmentation method. The code is available at https://github.com/Qianqian-Chen/MoSID.
|
14
|
Zimmermann C, Michelmann A, Daniel Y, Enderle MD, Salkic N, Linzenbold W. Application of Deep Learning for Real-Time Ablation Zone Measurement in Ultrasound Imaging. Cancers (Basel) 2024; 16:1700. [PMID: 38730652 PMCID: PMC11083655 DOI: 10.3390/cancers16091700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Revised: 04/24/2024] [Accepted: 04/26/2024] [Indexed: 05/13/2024] Open
Abstract
BACKGROUND The accurate delineation of ablation zones (AZs) is crucial for assessing radiofrequency ablation (RFA) therapy's efficacy. Manual measurement, the current standard, is subject to variability and potential inaccuracies. AIM This study aims to assess the effectiveness of Artificial Intelligence (AI) in automating AZ measurements in ultrasound images and to compare its accuracy with manual measurements. METHODS An in vitro study was conducted using chicken breast and liver samples subjected to bipolar RFA. Ultrasound images were captured every 15 s, with the AI model Mask2Former trained for AZ segmentation. The measurements were compared across all methods, focusing on short-axis (SA) metrics. RESULTS We performed 308 RFA procedures, generating 7275 ultrasound images across liver and chicken breast tissues. Manual and AI measurement comparisons for ablation zone diameters revealed no significant differences, with correlation coefficients exceeding 0.96 in both tissues (p < 0.001). Bland-Altman plots and a Deming regression analysis demonstrated a very close alignment between AI predictions and manual measurements, with the average difference between the two methods being -0.259 and -0.243 mm for bovine liver and chicken breast tissue, respectively. CONCLUSION The study validates the Mask2Former model as a promising tool for automating AZ measurement in RFA research, offering a significant step towards reducing manual measurement variability.
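The Bland-Altman agreement analysis used above reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement. A minimal sketch (generic method, not the study's code):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods, as in a Bland-Altman analysis.
    Limits are bias ± 1.96 × SD of the differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A reported average difference of -0.259 mm between AI and manual measurements corresponds to the `bias` term here; the limits of agreement indicate the range within which ~95% of individual differences are expected to fall.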
Affiliation(s)
- Nermin Salkic
- Erbe Elektromedizin GmbH, 72072 Tübingen, Germany
- Faculty of Medicine, University of Tuzla, 75000 Tuzla, Bosnia and Herzegovina
|
15
|
Wu L, Xia D, Wang J, Chen S, Cui X, Shen L, Huang Y. Deep Learning Detection and Segmentation of Facet Joints in Ultrasound Images Based on Convolutional Neural Networks and Enhanced Data Annotation. Diagnostics (Basel) 2024; 14:755. [PMID: 38611668 PMCID: PMC11011346 DOI: 10.3390/diagnostics14070755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Revised: 03/28/2024] [Accepted: 03/28/2024] [Indexed: 04/14/2024] Open
Abstract
The facet joint injection is the most common procedure used to relieve lower back pain. In this paper, we proposed a deep learning method for detecting and segmenting facet joints in ultrasound images based on convolutional neural networks (CNNs) and enhanced data annotation. In the enhanced data annotation, the facet joint was treated as the first target and the ventral complex as the second target to improve the capability of CNNs in recognizing the facet joint. A total of 300 cases of patients undergoing pain treatment were included. The ultrasound images were captured and labeled by two professional anesthesiologists, and then augmented to train a deep learning model based on the Mask Region-based CNN (Mask R-CNN). The performance of the deep learning model was evaluated using the average precision (AP) on the testing sets. The data augmentation and data annotation methods were found to improve the AP. The AP50 for facet joint detection and segmentation was 90.4% and 85.0%, respectively, demonstrating the satisfactory performance of the deep learning model. We presented a deep learning method for facet joint detection and segmentation in ultrasound images based on enhanced data annotation and the Mask R-CNN, demonstrating the feasibility and potential of deep learning techniques in facet joint ultrasound image analysis.
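The AP50 metric reported above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU helper for axis-aligned boxes (a generic sketch of the standard definition, not the study's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). AP50 treats a detection as a true positive
    when IoU >= 0.5 with a ground-truth box."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Two unit boxes offset by half their width overlap in a quarter of their union; two 2×2 boxes offset by one pixel in each direction give IoU = 1/7.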
Affiliation(s)
- Xulei Cui
- Department of Anesthesiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100006, China; (L.W.); (D.X.); (J.W.); (S.C.); (L.S.); (Y.H.)
|
16
|
Xia S, Li Q, Zhu HT, Zhang XY, Shi YJ, Yang D, Wu J, Guan Z, Lu Q, Li XT, Sun YS. Fully semantic segmentation for rectal cancer based on post-nCRT MRI modality and deep learning framework. BMC Cancer 2024; 24:315. [PMID: 38454349 PMCID: PMC10919051 DOI: 10.1186/s12885-024-11997-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 02/13/2024] [Indexed: 03/09/2024] Open
Abstract
PURPOSE Rectal tumor segmentation on post-neoadjuvant chemoradiotherapy (nCRT) magnetic resonance imaging (MRI) has great significance for tumor measurement, radiomics analysis, treatment planning, and operative strategy. In this study, we developed and evaluated a segmentation model using convolutional neural networks on post-chemoradiation T2-weighted MRI alone, with the aim of reducing the detection workload for radiologists and clinicians. METHODS A total of 372 consecutive patients with locally advanced rectal cancer (LARC) were retrospectively enrolled from October 2015 to December 2017. The standard-of-care neoadjuvant process included 22-fraction intensity-modulated radiation therapy and oral capecitabine. Further, 243 patients (3061 slices) were grouped into training and validation datasets with a random 80:20 split, and 41 patients (408 slices) were used as the test dataset. A symmetric eight-layer deep network was developed using the nnU-Net framework, which outputs a segmentation result of the same size as the input. The trained deep learning (DL) network was examined using fivefold cross-validation and tumor lesions with different TRGs. RESULTS At the testing stage, the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were applied to quantitatively evaluate generalization performance. On the test dataset (41 patients, 408 slices), the average DSC, HD95, and MSD were 0.700 (95% CI: 0.680-0.720), 17.73 mm (95% CI: 16.08-19.39), and 3.11 mm (95% CI: 2.67-3.56), respectively. Eighty-two percent of the MSD values were less than 5 mm, and fifty-five percent were less than 2 mm (median 1.62 mm, minimum 0.07 mm). CONCLUSIONS The experimental results indicated that the constructed pipeline could achieve relatively high accuracy. Future work will focus on assessing performance with multicentre external validation.
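The HD95 and MSD metrics above are both computed from nearest-neighbour distances between boundary point sets. A brute-force NumPy sketch of both (an illustration of the standard definitions, not the study's code; practical implementations use spatial indices for speed):

```python
import numpy as np

def surface_distances(pts_a, pts_b):
    """Directed distances: for each point in pts_a (shape (n, d)),
    the Euclidean distance to its nearest neighbour in pts_b."""
    a = np.asarray(pts_a, float)[:, None, :]
    b = np.asarray(pts_b, float)[None, :, :]
    return np.sqrt(((a - b) ** 2).sum(-1)).min(axis=1)

def hd95_and_msd(pts_a, pts_b):
    """Symmetric 95th-percentile Hausdorff distance (HD95) and mean
    surface distance (MSD) between two boundary point sets."""
    all_d = np.concatenate([surface_distances(pts_a, pts_b),
                            surface_distances(pts_b, pts_a)])
    return np.percentile(all_d, 95), all_d.mean()
```

Identical point sets yield (0, 0); two single points 5 mm apart yield HD95 = MSD = 5 mm.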
Affiliation(s)
- Shaojun Xia
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qingyang Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Hai-Tao Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Yan Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Yan-Jie Shi
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ding Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Jiaqi Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Zhen Guan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Qiaoyuan Lu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Xiao-Ting Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China
- Ying-Shi Sun
- Institute of Medical Technology, Peking University Health Science Center, Haidian District, No. 38 Xueyuan Road, Beijing, 100191, China.
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiology, Peking University Cancer Hospital & Institute, Hai Dian District, No. 52 Fu Cheng Road, Beijing, 100142, China.
|
17
|
Oh K, Lee SE, Kim EK. 3-D breast nodule detection on automated breast ultrasound using faster region-based convolutional neural networks and U-Net. Sci Rep 2023; 13:22625. [PMID: 38114666 PMCID: PMC10730541 DOI: 10.1038/s41598-023-49794-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 12/12/2023] [Indexed: 12/21/2023] Open
Abstract
Mammography is currently the most commonly used modality for breast cancer screening. However, its sensitivity is relatively low in women with dense breasts. Dense breast tissues show a relatively high rate of interval cancers and are at high risk for developing breast cancer. As a supplemental screening tool, ultrasonography is a widely adopted adjunct to standard mammography, especially for dense breasts. Lately, automated breast ultrasound imaging has gained attention due to its advantages over hand-held ultrasound imaging. However, automated breast ultrasound imaging requires considerable time and effort to read because of the large volume of data. Hence, developing a computer-aided nodule detection system for automated breast ultrasound is of great practical value. This study proposes a three-dimensional breast nodule detection system based on a simple two-dimensional deep-learning model exploiting automated breast ultrasound. Additionally, we provide several postprocessing steps to reduce false positives. In our experiments using in-house automated breast ultrasound datasets, a sensitivity of [Formula: see text] with 8.6 false positives is achieved on unseen test data at best.
Affiliation(s)
- Kangrok Oh
- Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea.
|
18
|
Cai R, Liu Y, Sun Z, Wang Y, Wang Y, Li F, Jiang H. Deep-learning based segmentation of ultrasound adipose image for liposuction. Int J Med Robot 2023; 19:e2548. [PMID: 37448348 DOI: 10.1002/rcs.2548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 06/25/2023] [Accepted: 06/27/2023] [Indexed: 07/15/2023]
Abstract
BACKGROUND To develop an automatic and reliable ultrasonic visual system for robot- or computer-assisted liposuction, we examined the use of deep learning for the segmentation of adipose ultrasound images in clinical and educational settings. METHODS To segment adipose layers, we propose an Attention Skip-Convolutions ResU-Net (Attention SCResU-Net) consisting of SC residual blocks, attention gates, and a U-Net architecture. Transfer learning is utilized to compensate for the scarcity of clinical data, drawing on Bama pig and clinical human adipose ultrasound image datasets. RESULTS The final model obtains a Dice of 99.06 ± 0.95% and an ASD of 0.19 ± 0.18 mm on the clinical dataset, outperforming other methods. By fine-tuning the eight deepest layers, accurate and stable segmentation results are obtained. CONCLUSIONS The new deep-learning method achieves accurate and automatic segmentation of adipose ultrasound images in real time, thereby enhancing the safety of liposuction and enabling novice surgeons to better control the cannula.
Affiliation(s)
- Ruxin Cai
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Yanzhen Liu
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Zhibin Sun
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Yuneng Wang
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
- Yu Wang
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Facheng Li
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
- Haiyue Jiang
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
|
19
|
Yang C, Zhou Q, Li M, Xu L, Zeng Y, Liu J, Wei Y, Shi F, Chen J, Li P, Shu Y, Yang L, Shu J. MRI-based automatic identification and segmentation of extrahepatic cholangiocarcinoma using deep learning network. BMC Cancer 2023; 23:1089. [PMID: 37950207 PMCID: PMC10636947 DOI: 10.1186/s12885-023-11575-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 10/27/2023] [Indexed: 11/12/2023] Open
Abstract
BACKGROUND Accurate identification of extrahepatic cholangiocarcinoma (ECC) from an image is challenging because of the small size and complex background structure. Therefore, given the limitations of manual delineation, it is necessary to develop automated identification and segmentation methods for ECC. The aim of this study was to develop a deep learning approach for automatic identification and segmentation of ECC using MRI. METHODS We recruited 137 ECC patients from our hospital as the main dataset (C1) and an additional 40 patients from other hospitals as the external validation set (C2). All patients underwent axial T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI). Manual delineations were performed and served as the ground truth. Next, we used 3D VB-Net to establish single-modality automatic identification and segmentation models based on T1WI (model 1), T2WI (model 2), and DWI (model 3) in the training cohort (80% of C1), and compared them with the combined model (model 4). Subsequently, the generalization capability of the best models was evaluated using the testing set (20% of C1) and the external validation set (C2). Finally, the performance of the developed models was further evaluated. RESULTS Model 3 showed the best identification performance in the training, testing, and external validation cohorts, with success rates of 0.980, 0.786, and 0.725, respectively. Furthermore, model 3 yielded average Dice similarity coefficients (DSC) of 0.922, 0.495, and 0.466 for automatic ECC segmentation in the training, testing, and external validation cohorts, respectively. CONCLUSION The DWI-based model performed better in automatically identifying and segmenting ECC compared to T1WI and T2WI, which may guide clinical decisions and help determine prognosis.
Affiliation(s)
- Chunmei Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Qin Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Mingdong Li
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lulu Xu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yanyan Zeng
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jiong Liu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Chen
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Pinxiong Li
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yue Shu
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lu Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China.
|
20
|
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183 DOI: 10.1016/j.semcancer.2023.09.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 08/03/2023] [Accepted: 09/05/2023] [Indexed: 09/15/2023]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in segmentation, diagnosis, and prognosis of breast cancer. In this review, we give an overview of recent advancements in AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
|
21
|
Lei Y, Wang T, Roper J, Tian S, Patel P, Bradley JD, Jani AB, Liu T, Yang X. Automatic segmentation of neurovascular bundle on mri using deep learning based topological modulated network. Med Phys 2023; 50:5479-5488. [PMID: 36939189 PMCID: PMC10509305 DOI: 10.1002/mp.16378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 01/20/2023] [Accepted: 03/09/2023] [Indexed: 03/21/2023] Open
Abstract
PURPOSE Radiation damage to neurovascular bundles (NVBs) may be the cause of sexual dysfunction after radiotherapy for prostate cancer. However, it is challenging to delineate NVBs as organs-at-risk from planning CTs during radiotherapy. Recently, the integration of MR into radiotherapy has made NVB contour delineation possible. In this study, we aim to develop an MRI-based deep learning method for automatic NVB segmentation. METHODS The proposed method, named topological modulated network, consists of three subnetworks: a focal modulation, a hierarchical block, and a topological fully convolutional network (FCN). The focal modulation is used to derive the locations and bounds of the left and right NVBs, namely the candidate volumes-of-interest (VOIs). The hierarchical block aims to highlight NVB boundary information on the derived feature maps. The topological FCN then segments the NVBs inside the VOIs while accounting for the topological consistency inherent to vascular delineation. Based on the location information of the candidate VOIs, the NVB segmentations can then be mapped back to the input MRI's coordinate system. RESULTS A five-fold cross-validation study was performed on 60 patient cases to evaluate the performance of the proposed method. The segmented results were compared with manual contours. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95) are 0.81 ± 0.10 and 1.49 ± 0.88 mm for the left NVB, and 0.80 ± 0.15 and 1.54 ± 1.22 mm for the right NVB, respectively. CONCLUSION We proposed a novel deep learning-based segmentation method for NVBs on pelvic MR images. The good agreement of our method with the manually drawn ground-truth contours supports its feasibility; the method can potentially be used to spare NVBs during proton and photon radiotherapy and thereby improve quality of life for prostate cancer patients.
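Five-fold cross-validation, as used in this evaluation, partitions the cases into five disjoint folds and trains five times, each time holding one fold out for testing. A minimal index-splitting sketch (a generic illustration; the study's actual fold assignment is not specified):

```python
def k_fold_indices(n, k=5):
    """Partition sample indices range(n) into k disjoint folds and
    yield (train, test) index lists, one pair per fold, as in
    k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for f in range(k) if f != i for idx in folds[f]]
        yield train, test
```

Every case appears in exactly one test fold, so the reported DSC/HD95 averages cover all 60 patients without train/test leakage within any single fold.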
Affiliation(s)
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sibo Tian: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B Jani: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
22
Yue WY, Zhang HT, Gao S, Li G, Sun ZY, Tang Z, Cai JM, Tian N, Zhou J, Dong JH, Liu Y, Bai X, Sheng FG. Predicting Breast Cancer Subtypes Using Magnetic Resonance Imaging Based Radiomics With Automatic Segmentation. J Comput Assist Tomogr 2023; 47:729-737. [PMID: 37707402 PMCID: PMC10510832 DOI: 10.1097/rct.0000000000001474] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0]
Abstract
OBJECTIVE The aim of the study is to demonstrate whether radiomics based on an automatic segmentation method is feasible for predicting molecular subtypes. METHODS This retrospective study included 516 patients with confirmed breast cancer. An automatic segmentation model, a 3-dimensional U-Net-based convolutional neural network trained on our in-house dataset, was applied to segment the regions of interest. A set of 1316 radiomic features was extracted per region of interest. Eighteen cross-combination radiomics methods, pairing 6 feature selection methods with 3 classifiers, were used for model selection. Model classification performance was assessed using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. RESULTS The average Dice similarity coefficient of the automatic segmentation was 0.89. The radiomics models were predictive of the 4 molecular subtypes, with best averages of AUC = 0.8623, accuracy = 0.6596, sensitivity = 0.6383, and specificity = 0.8775. For luminal versus nonluminal subtypes, AUC = 0.8788 (95% confidence interval [CI], 0.8505-0.9071), accuracy = 0.7756, sensitivity = 0.7973, and specificity = 0.7466. For human epidermal growth factor receptor 2 (HER2)-enriched versus non-HER2-enriched subtypes, AUC = 0.8676 (95% CI, 0.8370-0.8982), accuracy = 0.7737, sensitivity = 0.8859, and specificity = 0.7283. For triple-negative versus non-triple-negative breast cancer, AUC = 0.9335 (95% CI, 0.9027-0.9643), accuracy = 0.9110, sensitivity = 0.4444, and specificity = 0.9865. CONCLUSIONS Radiomics based on automatic segmentation of magnetic resonance imaging can noninvasively predict 4 molecular subtypes of breast cancer and is potentially applicable to large samples.
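The accuracy, sensitivity, and specificity figures quoted above derive from a binary confusion matrix. A minimal, self-contained sketch (labels and values are toy data, not the study's):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels: 1 = positive, 0 = negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Toy example: 8 cases, one false negative and one false positive
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(classification_metrics(y_true, y_pred))
# {'accuracy': 0.75, 'sensitivity': 0.75, 'specificity': 0.75}
```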
Affiliation(s)
- Wen-Yi Yue: Fifth Medical Center of Chinese PLA General Hospital; Chinese PLA General Medical School
- Hong-Tao Zhang: Fifth Medical Center of Chinese PLA General Hospital
- Shen Gao: Fifth Medical Center of Chinese PLA General Hospital
- Guang Li: Keya Medical Technology Co, Ltd, Beijing, China
- Ze-Yu Sun: Keya Medical Technology Co, Ltd, Beijing, China
- Zhe Tang: Keya Medical Technology Co, Ltd, Beijing, China
- Jian-Ming Cai: Fifth Medical Center of Chinese PLA General Hospital
- Ning Tian: Fifth Medical Center of Chinese PLA General Hospital
- Juan Zhou: Fifth Medical Center of Chinese PLA General Hospital
- Jing-Hui Dong: Fifth Medical Center of Chinese PLA General Hospital
- Yuan Liu: Fifth Medical Center of Chinese PLA General Hospital
- Xu Bai: Fifth Medical Center of Chinese PLA General Hospital
- Fu-Geng Sheng: Fifth Medical Center of Chinese PLA General Hospital
23
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821 DOI: 10.1016/j.compbiomed.2023.107268] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5]
Abstract
The transformer is primarily used in the field of natural language processing. Recently, it has been adopted and shows promise in the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structures of the transformer. After that, we depict the recent progress of the transformer in the field of MIA. We organize the applications in a sequence of different tasks, including classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. A large number of experiments studied in this review illustrate that the transformer-based method outperforms existing methods through comparisons with multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review with the latest contents, detailed information, and comprehensive comparison may greatly benefit the broad MIA community.
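The attention mechanism recapped in this review is, at its core, scaled dot-product attention. A minimal NumPy sketch (shapes and random inputs are illustrative assumptions):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k) queries; K: (n_k, d_k) keys; V: (n_k, d_v) values.
    Returns the attended output and the attention weight matrix.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```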
Affiliation(s)
- Zhaoshan Liu: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117575, Singapore
- Qiujie Lv: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Ziduo Yang: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Yifan Li: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117575, Singapore
- Chau Hung Lee: Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore 308433, Singapore
- Lei Shen: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117575, Singapore
24
Tang S, Deng Z. CS-based multi-task learning network for arrhythmia reconstruction and classification using ECG signals. Physiol Meas 2023; 44:075001. [PMID: 37336244 DOI: 10.1088/1361-6579/acdfb5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Objective. Although current deep learning-based methods have achieved impressive results in electrocardiogram (ECG) arrhythmia classification, they rely on the original data to identify arrhythmia categories. However, the large amount of data generated by long-term ECG monitoring poses a significant challenge to limited-bandwidth and real-time systems, which limits the application of deep learning in ECG monitoring. Approach. This paper therefore proposes a novel multi-task network, CSML-Net, that combines compressed sensing and convolutional neural networks. In the proposed model, ECG signals are compressed with a learned measurement matrix and then recovered and classified simultaneously via shared layers and two task branches. A multi-scale feature module is designed to improve model performance. Main results. Experimental results on the MIT-BIH arrhythmia dataset demonstrate that the proposed method is superior to all compared approaches in terms of reconstruction quality and classification performance. Significance. The proposed model, which achieves reconstruction and classification in the compressed domain, is a promising approach for ECG arrhythmia reconstruction and classification.
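The compressed-sensing front end described above amounts to a linear projection of each ECG segment through a measurement matrix with fewer rows than columns. A toy NumPy sketch of that compression step; the random matrix, sizes, and synthetic signal are illustrative assumptions, not CSML-Net's learned matrix:

```python
import numpy as np

# Compressed-sensing style measurement: an M x N matrix (M < N) maps a
# length-N ECG segment to M compressed samples; a decoder network would
# later reconstruct and classify from y. Sizes here are illustrative.
rng = np.random.default_rng(42)
N, M = 256, 64                                 # 4x compression ratio
phi = rng.normal(size=(M, N)) / np.sqrt(M)     # measurement matrix
x = np.sin(np.linspace(0, 8 * np.pi, N))       # toy "ECG" segment
y = phi @ x                                    # compressed measurements
print(y.shape)  # (64,)
```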
Affiliation(s)
- Suigu Tang: Faculty of Innovation Engineering, School of Computer Science and Engineering, Macau University of Science and Technology, Macau 999078, People's Republic of China
- Zicong Deng: Guangzhou Vocational College of Technology & Business, Guangzhou 511442, People's Republic of China
25
Ru J, Lu B, Chen B, Shi J, Chen G, Wang M, Pan Z, Lin Y, Gao Z, Zhou J, Liu X, Zhang C. Attention guided neural ODE network for breast tumor segmentation in medical images. Comput Biol Med 2023; 159:106884. [PMID: 37071938 DOI: 10.1016/j.compbiomed.2023.106884] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0]
Abstract
Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool owing to its portability and ease of operation, while DCE-MRI highlights lesions more clearly and reveals tumor characteristics; both are noninvasive and nonradiative modalities for assessing breast cancer. Doctors make diagnoses and further treatment decisions based on the sizes, shapes, and textures of the breast masses shown on medical images, so automatic tumor segmentation via deep neural networks can assist doctors to some extent. To address common challenges faced by popular deep neural networks, such as large numbers of parameters, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node, which uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks to build an encoder-decoder structure, with feature modeling by neural ODEs completed at each level. In addition, we propose an attention module that calculates coefficients and generates refined attention features for the skip connections. Three publicly available breast ultrasound image datasets (BUSI, BUS, and OASBUD) and a private breast DCE-MRI dataset are used to assess the efficiency of the proposed model; we also upgrade the model to 3D for tumor segmentation with data selected from the public QIN Breast DCE-MRI dataset. The experiments show that the proposed model achieves competitive results compared with related methods while mitigating the common problems of deep neural networks.
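A neural ODE block, as used in the encoder-decoder above, treats a residual block as the integral of a learned dynamics function. A minimal sketch assuming fixed-step Euler integration (real implementations typically use adaptive solvers, and f would be a trained network rather than the toy linear dynamics here):

```python
def ode_block(h, f, t0=0.0, t1=1.0, steps=10):
    """Neural-ODE view of a residual block: h(t1) = h(t0) + ∫ f(h, t) dt,
    integrated here with fixed-step Euler. f is the learned dynamics;
    h is the feature vector, represented as a list of floats."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = [hi + dt * fi for hi, fi in zip(h, f(h, t))]
        t += dt
    return h

# Toy linear dynamics dh/dt = -h: the exact solution is h(1) = exp(-1) * h(0)
f = lambda h, t: [-hi for hi in h]
print(ode_block([1.0, 2.0], f, steps=1000))  # ≈ [exp(-1), 2*exp(-1)]
```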
Affiliation(s)
- Jintao Ru: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Beichen Lu: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Buran Chen: Department of Thyroid and Breast Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jialin Shi: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Gaoxiang Chen: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Meihao Wang: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Medical Imaging of Wenzhou, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Zhifang Pan: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Yezhi Lin: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Treatment and Life Support for Critical Diseases of Zhejiang Province, Wenzhou, 325000, People's Republic of China
- Zhihong Gao: Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jiejie Zhou: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Xiaoming Liu: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China
- Chen Zhang: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
26
Zhang Y, Dai X, Tian Z, Lei Y, Wynne JF, Patel P, Chen Y, Liu T, Yang X. Landmark tracking in liver US images using cascade convolutional neural networks with long short-term memory. Meas Sci Technol 2023; 34:054002. [PMID: 36743834 PMCID: PMC9893725 DOI: 10.1088/1361-6501/acb5b3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Accurate tracking of anatomic landmarks is critical for motion management in liver radiation therapy. Ultrasound (US) is a safe, low-cost, broadly available technology that offers real-time imaging capability. This study proposed a deep learning-based tracking method for US image-guided radiation therapy. The proposed cascade deep learning model is composed of an attention network, a mask region-based convolutional neural network (mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected area of landmark motion in order to reduce the search region. The mask R-CNN then produces multiple region-of-interest proposals in the reduced region and identifies the proposed landmark via three network heads: bounding box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among successive image frames for bounding box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets used in the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenges, where the landmarks were annotated by three experienced observers to obtain their mean positions. Five-fold cross-validation on the 24 given US sequences with ground truths shows that the mean tracking error for all landmarks is 0.65 ± 0.56 mm, and the errors of all landmarks are within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset that have image patterns similar to the training data, resulting in a mean tracking error of 0.94 ± 0.83 mm. The proposed deep learning model was implemented on a graphics processing unit (GPU), tracking 47-81 frames per second. Our experimental results demonstrate the feasibility and accuracy of the proposed method in tracking liver anatomic landmarks in US images, providing a potential solution for real-time liver tracking for active motion management during radiation therapy.
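The mean tracking error reported above is the mean Euclidean distance between tracked and ground-truth landmark positions across frames. A minimal sketch in plain Python; coordinates are toy values, not the study's data:

```python
import math

def mean_tracking_error(tracked, truth):
    """Mean Euclidean distance (e.g., in mm) between tracked and
    ground-truth landmark positions over a sequence of frames."""
    dists = [math.dist(p, q) for p, q in zip(tracked, truth)]
    return sum(dists) / len(dists)

# Toy 2-D landmark positions over three frames (mm)
tracked = [(10.0, 20.0), (10.5, 20.2), (11.0, 20.9)]
truth = [(10.0, 20.0), (10.0, 20.0), (10.6, 20.6)]
print(round(mean_tracking_error(tracked, truth), 3))  # 0.346
```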
Affiliation(s)
- Yupei Zhang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xianjin Dai: Department of Radiation Oncology, Stanford University, Stanford, CA 94035, United States of America
- Zhen Tian: Department of Radiation & Cellular Oncology, University of Chicago, Chicago, IL 60637, United States of America
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jacob F Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Yue Chen: The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, GA 30322, United States of America
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, GA 30322, United States of America
27
Machine learning on MRI radiomic features: identification of molecular subtype alteration in breast cancer after neoadjuvant therapy. Eur Radiol 2023; 33:2965-2974. [PMID: 36418622 DOI: 10.1007/s00330-022-09264-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0]
Abstract
OBJECTIVES Recent studies have revealed the change of molecular subtypes in breast cancer (BC) after neoadjuvant therapy (NAT). This study aims to construct a non-invasive model for predicting molecular subtype alteration in breast cancer after NAT. METHODS Eighty-two estrogen receptor (ER)-negative/ human epidermal growth factor receptor 2 (HER2)-negative or ER-low-positive/HER2-negative breast cancer patients who underwent NAT and completed baseline MRI were retrospectively recruited between July 2010 and November 2020. Subtype alteration was observed in 21 cases after NAT. A 2D-DenseUNet machine-learning model was built to perform automatic segmentation of breast cancer. 851 radiomic features were extracted from each MRI sequence (T2-weighted imaging, ADC, DCE, and contrast-enhanced T1-weighted imaging), both in the manual and auto-segmentation masks. All samples were divided into a training set (n = 66) and a test set (n = 16). XGBoost model with 5-fold cross-validation was performed to predict molecular subtype alterations in breast cancer patients after NAT. The predictive ability of these models was subsequently evaluated by the AUC of the ROC curve, sensitivity, and specificity. RESULTS A model consisting of three radiomics features from the manual segmentation of multi-sequence MRI achieved favorable predictive efficacy in identifying molecular subtype alteration in BC after NAT (cross-validation set: AUC = 0.908, independent test set: AUC = 0.864); whereas an automatic segmentation approach of BC lesions on the DCE sequence produced good segmentation results (Dice similarity coefficient = 0.720). CONCLUSIONS A machine learning model based on baseline MRI is proven useful for predicting molecular subtype alterations in breast cancer after NAT. 
KEY POINTS • Machine learning models using an MRI-based radiomics signature can predict molecular subtype alterations in breast cancer after neoadjuvant therapy, which subsequently affect treatment protocols. • The application of deep learning to the automatic segmentation of breast cancer lesions from MRI shows the potential to replace manual segmentation.
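The 5-fold cross-validation used in the XGBoost modeling above partitions the cohort so that every sample appears in exactly one held-out fold. A minimal index-splitting sketch in plain Python (the interleaved assignment and cohort size are illustrative; libraries such as scikit-learn provide stratified variants):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Each sample lands in exactly one test fold."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# A cohort of 82 patients, as in the study, split into 5 folds
splits = list(k_fold_indices(82, k=5))
print(len(splits))                            # 5
print(len(splits[0][0]), len(splits[0][1]))   # 65 17
```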
28
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023; 136:2127-2172. [PMID: 37152661 PMCID: PMC7614504 DOI: 10.32604/cmes.2023.025484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Problems. For people all over the world, cancer is one of the most feared diseases, a major obstacle to improving life expectancy, and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women, and the data show that female breast cancer has become one of the most commonly diagnosed cancers. Aims. A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment outcomes and survival. Accordingly, many diagnostic approaches exist for breast cancer, such as computer-aided diagnosis (CAD). Methods. We present a comprehensive review of breast cancer diagnosis based on the convolutional neural network (CNN), drawing on a large body of recent papers. First, we introduce several imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. We then divide the diagnosis of breast cancer into three tasks: 1. classification; 2. detection; 3. segmentation. Conclusion. Although CNN-based diagnosis has achieved great success, some limitations remain: (i) there are too few good datasets, since a good public breast cancer dataset must address professional medical knowledge, privacy issues, funding, dataset size, and more; (ii) when the dataset is very large, CNN-based models require extensive computation and time to complete the diagnosis; (iii) small datasets easily cause overfitting.
Affiliation(s)
- Yu-Dong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
29
Farooq MU, Ullah Z, Gwak J. Residual attention based uncertainty-guided mean teacher model for semi-supervised breast masses segmentation in 2D ultrasonography. Comput Med Imaging Graph 2023; 104:102173. [PMID: 36641970 DOI: 10.1016/j.compmedimag.2022.102173] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0]
Abstract
Breast tumors are the second deadliest disease among women around the world, and early diagnosis is extremely important for improving the survival rate. Recent deep learning techniques have proved helpful in the timely diagnosis of various tumors. In the case of breast tumors, however, the characteristics of the lesions, i.e., low visual contrast, unclear boundaries, and diversity in shape and size, make it challenging to design a highly efficient detection system. Additionally, the scarcity of publicly available labeled data is a major hurdle in the development of highly accurate and robust deep learning models for breast tumor detection. To overcome these issues, we propose a residual-attention-based uncertainty-guided mean teacher framework that incorporates residual and attention blocks: the residual connections optimize the deep network by enabling the flow of high-level features, while the attention modules improve the model's focus by optimizing its weights during the learning process. We further explore the potential of utilizing unlabeled data during training by employing semi-supervised learning (SSL). In particular, the uncertainty-guided mean teacher-student architecture is exploited to demonstrate the benefit of incorporating unlabeled samples when training a residual attention U-Net model. The proposed SSL framework has been rigorously evaluated on two publicly available labeled datasets, BUSI and UDIAT. The quantitative and qualitative results demonstrate that the proposed framework achieves competitive performance with respect to previous state-of-the-art techniques and outperforms existing breast ultrasound mass segmentation techniques. Most importantly, the study demonstrates the potential of incorporating additional unlabeled data to improve the performance of breast tumor segmentation.
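In a mean teacher framework, the teacher's weights are not trained by gradient descent but maintained as an exponential moving average (EMA) of the student's weights. A minimal sketch of that update rule (flat weight lists and the alpha value are illustrative assumptions):

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: each teacher weight becomes an exponential
    moving average of the corresponding student weight."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

# Toy example: teacher drifts toward a fixed student over three steps
teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(3):
    teacher = ema_update(teacher, student, alpha=0.5)
print(teacher)  # [0.875, 1.75]
```

With a realistic alpha close to 1, the teacher changes slowly, which is what makes its predictions a stable target for the consistency loss on unlabeled samples.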
Affiliation(s)
- Muhammad Umar Farooq: Department of IT Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, South Korea
- Zahid Ullah: Department of Software, Korea National University of Transportation, Chungju 27469, South Korea
- Jeonghwan Gwak: Department of IT Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, South Korea; Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, South Korea; Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, South Korea
30
SMDetector: Small mitotic detector in histopathology images using faster R-CNN with dilated convolutions in backbone model. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
31
Malekmohammadi A, Barekatrezaei S, Kozegar E, Soryani M. Mass detection in automated 3-D breast ultrasound using a patch Bi-ConvLSTM network. Ultrasonics 2023; 129:106891. [PMID: 36493507 DOI: 10.1016/j.ultras.2022.106891] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0]
Abstract
Breast cancer mortality can be significantly reduced by early detection of its symptoms. The 3-D Automated Breast Ultrasound (ABUS) has been widely used for breast screening due to its high sensitivity and reproducibility. The large number of ABUS slices, and high variation in size and shape of the masses, make the manual evaluation a challenging and time-consuming process. To assist the radiologists, we propose a convolutional BiLSTM network to classify the slices based on the presence of a mass. Because of its patch-based architecture, this model produces the approximate location of masses as a heat map. The prepared dataset consists of 60 volumes belonging to 43 patients. The precision, recall, accuracy, F1-score, and AUC of the proposed model for slice classification were 84%, 84%, 93%, 84%, and 97%, respectively. Based on the FROC analysis, the proposed detector obtained a sensitivity of 82% with two false positives per volume.
Affiliation(s)
- Amin Malekmohammadi: School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran
- Sepideh Barekatrezaei: School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran
- Ehsan Kozegar: Faculty of Technology and Engineering-East of Guilan, University of Guilan, Vajargah, Rudsar, Guilan 4199613776, Iran
- Mohsen Soryani: School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran
32
ME-CCNN: Multi-encoded images and a cascade convolutional neural network for breast tumor segmentation and recognition. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10426-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
33
Ranjbarzadeh R, Dorosti S, Jafarzadeh Ghoushchi S, Caputo A, Tirkolaee EB, Ali SS, Arshadi Z, Bendechache M. Breast tumor localization and segmentation using machine learning techniques: Overview of datasets, findings, and methods. Comput Biol Med 2023; 152:106443. [PMID: 36563539 DOI: 10.1016/j.compbiomed.2022.106443] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0]
Abstract
The Global Cancer Statistics 2020 reported breast cancer (BC) as the most commonly diagnosed cancer type; early detection therefore reduces the risk of death from it. Breast imaging techniques are among the most frequently used methods to detect the position of cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) is a generation of computer systems that assist experts in detecting abnormalities in medical images. In recent decades, CAD has applied deep learning (DL) and machine learning approaches to perform complex medical tasks in computer vision and to improve decision-making for doctors and radiologists. The most popular and widely used image-processing technique in CAD systems is segmentation, which consists of extracting the region of interest (ROI) through various techniques. This research provides a detailed description of the main categories of segmentation procedures, classified into three classes: supervised, unsupervised, and DL. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons, helping researchers better understand them and choose the appropriate method for a given use case.
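Of the three segmentation categories the overview names, the unsupervised class includes classic intensity thresholding. A self-contained sketch of Otsu's method, one common unsupervised ROI-extraction step (the toy pixel values are illustrative, and the survey does not single out this particular algorithm):

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the intensity threshold that maximizes the
    between-class variance of background vs. foreground pixels."""
    best_t, best_var = 0, -1.0
    for t in range(min(pixels) + 1, max(pixels) + 1):
        bg = [p for p in pixels if p < t]
        fg = [p for p in pixels if p >= t]
        if not bg or not fg:
            continue
        w_bg, w_fg = len(bg) / len(pixels), len(fg) / len(pixels)
        mu_bg, mu_fg = sum(bg) / len(bg), sum(fg) / len(fg)
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy "image": dark background and bright lesion pixels
pixels = [10, 12, 11, 13, 10, 200, 210, 205]
print(otsu_threshold(pixels))  # 14, separating dark from bright
```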
Affiliation(s)
- Ramin Ranjbarzadeh: School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Shadi Dorosti: Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran
- Annalina Caputo: School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Sadia Samar Ali: Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Zahra Arshadi: Faculty of Electronics, Telecommunications and Physics Engineering, Polytechnic University, Turin, Italy
- Malika Bendechache: Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland
34
Lei Y, Wang T, Jeong JJ, Janopaul-Naylor J, Kesarwala AH, Roper J, Tian S, Bradley JD, Liu T, Higgins K, Yang X. Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network. Med Phys 2023; 50:274-283. [PMID: 36203393 PMCID: PMC9868056 DOI: 10.1002/mp.16001] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0]
Abstract
BACKGROUND Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE In this paper, we propose a hybrid regional network method of automatically segmenting lung tumors from PET/CT images. METHODS The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, whereas the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. This model consists of five major subnetworks, that is, a dual feature representation network (DFRN), a regional proposal network (RPN), a specific tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as inputs, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net is used to segment tumor within a volume-of-interest (VOI) with a score head evaluating the segmentation performed by the mask-Net. Finally, the segmented tumor within the VOI was mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jacard, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. 
RESULTS In fivefold cross-validation, this method achieved Dice and MSD of 0.84 ± 0.15 and 1.38 ± 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.
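The Dice similarity coefficient and Jaccard index used to evaluate segmentations like the one above are simple overlap ratios. A minimal sketch over binary NumPy masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree perfectly

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0
```

The two metrics are related by J = D / (2 − D), so for any pair of masks either one determines the other.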
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Jiwoong J Jeong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
|
35
|
Al-hammuri K, Gebali F, Thirumarai Chelvan I, Kanan A. Tongue Contour Tracking and Segmentation in Lingual Ultrasound for Speech Recognition: A Review. Diagnostics (Basel) 2022; 12:diagnostics12112811. [PMID: 36428870 PMCID: PMC9689563 DOI: 10.3390/diagnostics12112811] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 11/07/2022] [Accepted: 11/13/2022] [Indexed: 11/18/2022] Open
Abstract
Lingual ultrasound imaging is essential in linguistic research and speech recognition. It has been widely used as visual feedback to enhance language learning for non-native speakers, and in the study and remediation of speech-related disorders, articulation research and analysis, swallowing studies, 3D tongue modelling, and silent speech interfaces. This article provides a comparative analysis and review, based on quantitative and qualitative criteria, of the two main streams of tongue contour segmentation from ultrasound images. The first stream utilizes traditional computer vision and image processing algorithms for tongue segmentation; the second uses machine and deep learning algorithms. The results show that tongue tracking using machine learning-based techniques is superior to traditional techniques in terms of performance and generalization ability. Meanwhile, traditional techniques remain helpful for implementing interactive image segmentation to extract valuable features during training and postprocessing. We recommend a hybrid approach that combines machine learning and traditional techniques to implement a real-time tongue segmentation tool.
Affiliation(s)
- Khalid Al-hammuri
- Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8W 2Y2, Canada
- Fayez Gebali
- Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC V8W 2Y2, Canada
- Awos Kanan
- Department of Computer Engineering, Princess Sumaya University for Technology, Amman 11941, Jordan
|
36
|
Hussain S, Xi X, Ullah I, Inam SA, Naz F, Shaheed K, Ali SA, Tian C. A Discriminative Level Set Method with Deep Supervision for Breast Tumor Segmentation. Comput Biol Med 2022; 149:105995. [DOI: 10.1016/j.compbiomed.2022.105995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 08/05/2022] [Accepted: 08/14/2022] [Indexed: 11/03/2022]
|
37
|
Dias AH, Smith AM, Shah V, Pigg D, Gormsen LC, Munk OL. Clinical validation of a population-based input function for 20-min dynamic whole-body 18F-FDG multiparametric PET imaging. EJNMMI Phys 2022; 9:60. [PMID: 36076097 PMCID: PMC9458803 DOI: 10.1186/s40658-022-00490-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 08/29/2022] [Indexed: 11/26/2022] Open
Abstract
Purpose Contemporary PET/CT scanners can use 70-min dynamic whole-body (D-WB) PET to generate more quantitative information about FDG uptake than just the SUV by generating parametric images of FDG metabolic rate (MRFDG). The analysis requires the late (50–70 min) D-WB tissue data combined with the full (0–70 min) arterial input function (AIF). Our aim was to assess whether the use of a scaled population-based input function (sPBIF) obviates the need for the early D-WB PET acquisition and allows for a clinically feasible 20-min D-WB PET examination.
Methods A PBIF was calculated based on AIFs from 20 patients that were D-WB PET scanned for 120 min with simultaneous arterial blood sampling. MRFDG imaging using PBIF requires that the area under the curve (AUC) of the sPBIF is equal to the AUC of the individual patient’s input function because sPBIF AUC bias translates into MRFDG bias. Special patient characteristics could affect the shape of their AIF. Thus, we validated the use of PBIF in 171 patients that were divided into 12 subgroups according to the following characteristics: diabetes, cardiac ejection fraction, blood pressure, weight, eGFR and age. For each patient, the PBIF was scaled to the aorta image-derived input function (IDIF) to calculate a sPBIF, and the AUC bias was calculated. Results We found excellent agreement between the AIF and IDIF at all times. For the clinical validation, the use of sPBIF led to an acceptable AUC bias of 1–5% in most subgroups except for patients with diabetes or patients with low eGFR, where the biases were marginally higher at 7%. Multiparametric MRFDG images based on a short 20-min D-WB PET and sPBIF were visually indistinguishable from images produced by the full 70-min D-WB PET and individual IDIF. Conclusions A short 20-min D-WB PET examination using PBIF can be used for multiparametric imaging without compromising the image quality or precision of MRFDG. The D-WB PET examination may therefore be used in clinical routine for a wide range of patients, potentially allowing for more precise quantification in e.g. treatment response imaging. Supplementary Information The online version contains supplementary material available at 10.1186/s40658-022-00490-y.
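The scaling step described in the Methods — matching the sPBIF's area under the curve to the patient's image-derived input function over the measured window, since any AUC bias translates directly into MRFDG bias — can be sketched generically. The function names, time grids, and curves below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal area under the curve y(x)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def scale_pbif(t_pbif, pbif, t_idif, idif):
    """Scale a population-based input function (PBIF) so that its AUC over the
    late imaging window equals the AUC of the patient's image-derived input
    function (IDIF) on that same window."""
    # Resample the PBIF onto the IDIF's (late) time grid before comparing areas
    pbif_late = np.interp(t_idif, t_pbif, pbif)
    scale = _trapz(idif, t_idif) / _trapz(pbif_late, t_idif)
    return scale * np.asarray(pbif, dtype=float)
```

A global multiplicative scale preserves the population curve's shape while anchoring its magnitude to the individual patient, which is what makes the short late-window acquisition sufficient.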
Affiliation(s)
- André H Dias
- Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Palle Juul-Jensens Boulevard 165, 8200, Aarhus N, Denmark
- Anne M Smith
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
- Vijay Shah
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
- David Pigg
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
- Lars C Gormsen
- Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Palle Juul-Jensens Boulevard 165, 8200, Aarhus N, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus N, Denmark
- Ole L Munk
- Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Palle Juul-Jensens Boulevard 165, 8200, Aarhus N, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus N, Denmark
|
38
|
Shen X, Wu X, Liu R, Li H, Yin J, Wang L, Ma H. Accurate segmentation of breast tumor in ultrasound images through joint training and refined segmentation. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Accepted: 08/12/2022] [Indexed: 11/11/2022]
Abstract
Objective. This paper proposes an automatic breast tumor segmentation method for two-dimensional (2D) ultrasound images, which is significantly more accurate, robust, and adaptable than common deep learning models on small datasets. Approach. A generalized joint training and refined segmentation framework (JR) was established, involving a joint training module (J module) and a refined segmentation module (R module). In the J module, two segmentation networks are trained simultaneously under the guidance of the proposed Jocor for Segmentation (JFS) algorithm. In the R module, the output of the J module is refined by the proposed area first (AF) algorithm and the marked watershed (MW) algorithm. The AF mainly reduces false positives, which arise easily from the inherent features of breast ultrasound images, in light of the area, distance, average radical derivative (ARD), and radical gradient index (RGI) of candidate contours. Meanwhile, the MW avoids over-segmentation and refines segmentation results. To verify its performance, the JR framework was evaluated on three breast ultrasound image datasets. Image dataset A contains 1036 images from local hospitals. Image datasets B and C are two public datasets, containing 562 and 163 images, respectively. The evaluation was followed by related ablation experiments. Main results. The JR outperformed the other state-of-the-art (SOTA) methods on the three image datasets, especially on image dataset B, where it improved the true positive ratio (TPR) and Jaccard index (JI) by 1.5% and 3.2%, respectively, and reduced the false positive ratio (FPR) by 3.7%. The results of the ablation experiments show that each component of the JR matters and contributes to the segmentation accuracy, particularly in the reduction of false positives. Significance. This study successfully combines traditional segmentation methods with deep learning models. The proposed method can segment small-scale breast ultrasound image datasets efficiently and effectively, with excellent generalization performance.
|
39
|
Breast MRI Tumor Automatic Segmentation and Triple-Negative Breast Cancer Discrimination Algorithm Based on Deep Learning. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:2541358. [PMID: 36092784 PMCID: PMC9453096 DOI: 10.1155/2022/2541358] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/19/2022] [Accepted: 08/20/2022] [Indexed: 01/23/2023]
Abstract
Background Breast cancer is a cancer that starts in the epithelial tissue of the breast. Its incidence has been rising in recent years, with the disease developing in increasingly younger patients. Magnetic resonance imaging (MRI) plays an important role in breast tumor detection and treatment planning in today's clinical practice. As manual segmentation grows more time-consuming and the observed subjects become more diversified, automated segmentation becomes more appealing. Methodology For MRI breast tumor segmentation, we propose a CNN-SVM network, in which a support vector machine operates on the labels output by a trained convolutional neural network. During the testing phase, the convolutional neural network's labeled output, together with the test grayscale image, is passed to the SVM classifier for accurate segmentation. Results We tested on the collected breast tumor dataset and found that our proposed combined CNN-SVM network achieved 0.93, 0.95, and 0.92 on the DSC coefficient, PPV, and sensitivity index, respectively. We also compared it with the segmentation frameworks of other papers, and the results show that our CNN-SVM network performs better and can accurately segment breast tumors. Conclusion Our proposed CNN-SVM combined network achieves good segmentation results on the breast tumor dataset. The method can adapt to differences among breast tumors and segment them accurately and efficiently, which is of great significance for identifying triple-negative breast cancer in the future.
|
40
|
LMA-Net: A lesion morphology aware network for medical image segmentation towards breast tumors. Comput Biol Med 2022; 147:105685. [DOI: 10.1016/j.compbiomed.2022.105685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/20/2022] [Accepted: 05/30/2022] [Indexed: 11/17/2022]
|
41
|
Cheng Z, Li Y, Chen H, Zhang Z, Pan P, Cheng L. DSGMFFN: Deepest semantically guided multi-scale feature fusion network for automated lesion segmentation in ABUS images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106891. [PMID: 35623209 DOI: 10.1016/j.cmpb.2022.106891] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 05/06/2022] [Accepted: 05/12/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Automated breast ultrasound (ABUS) imaging technology has been widely used in clinical diagnosis. Accurate lesion segmentation in ABUS images is essential in computer-aided diagnosis (CAD) systems. Although deep learning-based approaches have been widely employed in medical image analysis, the large variety of lesions and the imaging interference make ABUS lesion segmentation challenging. METHODS In this paper, we propose a novel deepest semantically guided multi-scale feature fusion network (DSGMFFN) for lesion segmentation in 2D ABUS slices. In order to cope with the large variety of lesions, a deepest semantically guided decoder (DSGNet) and a multi-scale feature fusion model (MFFM) are designed, where the deepest semantics is fully utilized to guide the decoding and feature fusion. That is, the deepest information is given the highest weight in the feature fusion process, and participates in every decoding stage. Aiming at the challenge of imaging interference, a novel mixed attention mechanism is developed, integrating spatial self-attention and channel self-attention to obtain the correlation among pixels and channels to highlight the lesion region. RESULTS The proposed DSGMFFN is evaluated on 3742 slices of 170 ABUS volumes. The experimental result indicates that DSGMFFN achieves 84.54% and 73.24% in Dice similarity coefficient (DSC) and intersection over union (IoU), respectively. CONCLUSIONS The proposed method shows better performance than the state-of-the-art methods in ABUS lesion segmentation. Incorrect segmentation caused by lesion variety and imaging interference in ABUS images can be alleviated.
Affiliation(s)
- Zhanyi Cheng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China.
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Zilu Zhang
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Pan Pan
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Lin Cheng
- Center for Breast, People's Hospital of Peking University, Beijing, China
|
42
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
|
43
|
Eidex Z, Wang T, Lei Y, Axente M, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network. Med Phys 2022; 49:5216-5224. [PMID: 35533237 PMCID: PMC9388615 DOI: 10.1002/mp.15687] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 03/18/2022] [Accepted: 04/16/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, which is learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to address the challenge of segmenting the DIL's irregular shapes. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with five-fold cross-validation and holdout testing. RESULTS The average centroid distance, volume difference, and Dice similarity coefficient (DSC) value for prostate/DIL were 4.3±7.5 mm/3.73±3.78 mm, 4.5±7.9 cc/0.41±0.59 cc, and 89.6±8.9%/84.3±11.9%, respectively. Comparable results were obtained in the holdout test, and similar or superior segmentation outcomes were seen when comparing the proposed method to competing segmentation approaches. CONCLUSIONS The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Marian Axente
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Jeffery D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
|
44
|
Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022; 32:7163-7172. [PMID: 35488916 DOI: 10.1007/s00330-022-08836-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 04/15/2022] [Accepted: 04/21/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To develop novel deep learning networks (DLNs) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine their performance for diagnosing breast cancer in automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set and a test set (600 vs. 169). The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) added a new ASN to the traditional ResNet networks and extracted morphological information of breast tumors. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated. The diagnostic performances of the novel DLNs were compared with those of two radiologists with different levels of experience. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85 and AP 0.90) compared with the remaining five DLNs. Compared with the novice radiologist, the novel DLNs performed better: the three novel DLNs increased the F1 score from 0.77 to 0.78, 0.81, and 0.82, respectively. However, their diagnostic performance was worse than that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS • A novel automatic segmentation network to extract morphological information was successfully developed and implemented with ResNet deep learning networks. • The novel deep learning networks in our research performed better than the traditional deep learning networks in the diagnosis of breast cancer using ABUS images. • The novel deep learning networks in our research may be useful for novice radiologists to improve diagnostic performance.
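The diagnostic statistics reported above (accuracy, sensitivity, specificity, PPV, NPV) all derive from a 2×2 confusion matrix. A minimal sketch with hypothetical counts, not data from the study:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic statistics from confusion-matrix counts
    (tp/fp/tn/fn = true/false positives and negatives)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),   # recall on benign cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of malignancy in the test set, which is why all five are usually reported together.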
Affiliation(s)
- Qiucheng Wang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- He Chen
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Bo Li
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Haitao Shang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Hua Shao
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Zhongshuai Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Wen Cheng
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China.
|
45
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar or even better performance of DL algorithms compared to radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine exactly the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
|
46
|
Zhu X, Wu Y, Hu H, Zhuang X, Yao J, Ou D, Li W, Song M, Feng N, Xu D. Medical lesion segmentation by combining multi‐modal images with modality weighted UNet. Med Phys 2022; 49:3692-3704. [PMID: 35312077 DOI: 10.1002/mp.15610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 02/25/2022] [Accepted: 03/04/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Xiner Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yichao Wu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Haoji Hu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Xianwei Zhuang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Di Ou
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Mei Song
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Na Feng
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
47
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and access to datasets large enough to train AI algorithms. Different imaging modalities that researchers have exploited to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. This article then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, this article focuses on reviewing frameworks that have reported results using mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammography is the availability of its labelled datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and the quality of the dataset generally affects the performance of AI-based algorithms. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
48
Matkovic LA, Wang T, Lei Y, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys Med Biol 2021; 66:10.1088/1361-6560/ac3c13. [PMID: 34808603 PMCID: PMC8725511 DOI: 10.1088/1361-6560/ac3c13] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Accepted: 11/22/2021] [Indexed: 12/22/2022]
Abstract
Focal boost to dominant intraprostatic lesions (DILs) has recently been proposed for prostate radiation therapy. Accurate and fast delineation of the prostate and DILs is thus required during treatment planning. In this paper, we develop a learning-based method using positron emission tomography (PET)/computed tomography (CT) images to automatically segment the prostate and its DILs. To enable end-to-end segmentation, a deep learning-based method, called cascaded regional-Net, is utilized. The first network, referred to as the dual attention network, is used to segment the prostate by extracting comprehensive features from both PET and CT images. A second network, referred to as the mask scoring regional convolutional neural network (MSR-CNN), is used to segment the DILs from the PET and CT within the prostate region. A scoring strategy is used to diminish misclassification of the DILs. For DIL segmentation, the proposed cascaded regional-Net uses two steps to remove normal tissue regions, with the first step cropping images based on the prostate segmentation and the second step using the MSR-CNN to further locate the DILs. The binary masks of DILs and prostates of testing patients are generated on the PET/CT images by the trained model. For evaluation, we retrospectively investigated 49 prostate cancer patients with acquired PET/CT images. The prostate and DILs of each patient were contoured by radiation oncologists and set as the ground truths and targets. We used five-fold cross-validation and a hold-out test to train and evaluate our method. The mean surface distance and DSC values were 0.666 ± 0.696 mm and 0.932 ± 0.059 for the prostate and 0.814 ± 1.002 mm and 0.801 ± 0.178 for the DILs among all 49 patients. The proposed method has shown promise for facilitating prostate and DIL delineation for DIL focal boost prostate radiation therapy.
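For readers unfamiliar with the metric, the mean surface distance quoted above is the symmetric average of nearest-neighbour distances between the surfaces of the predicted and ground-truth masks. A minimal NumPy sketch for illustration (not the authors' implementation; function names are our own, and isotropic voxel spacing is assumed):

```python
import numpy as np

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of voxels on the boundary of a binary mask."""
    mask = mask.astype(bool)
    padded = np.pad(mask, 1)  # False border so edge voxels count as surface
    interior = padded.copy()
    # a voxel is interior if all of its face-neighbours are inside the mask
    for axis in range(mask.ndim):
        interior &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    surface = padded & ~interior
    return np.argwhere(surface[(slice(1, -1),) * mask.ndim])

def mean_surface_distance(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric mean surface distance between two binary masks."""
    pa, pb = surface_points(a), surface_points(b)
    # pairwise Euclidean distances between the two surfaces
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return (d.min(axis=1).sum() + d.min(axis=0).sum()) / (len(pa) + len(pb))
```

The brute-force pairwise distance matrix is fine for small masks; production code would typically use a distance transform instead.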
Affiliation(s)
- Luke A. Matkovic
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Jeffery D. Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
49
Rahman A, Rahman M, Kundu D, Karim MR, Band SS, Sookhak M. Study on IoT for SARS-CoV-2 with healthcare: present and future perspective. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:9697-9726. [PMID: 34814364 DOI: 10.3934/mbe.2021475] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The ever-evolving and contagious nature of the Coronavirus (COVID-19) has immobilized the world around us. As the daily number of infected cases increases, containing the spread of this virus is proving to be an overwhelming task. Healthcare facilities around the world are overburdened with the ominous responsibility of combating an ever-worsening scenario. To aid the healthcare system, Internet of Things (IoT) technology provides a promising solution, and its use for the efficient tracing and testing of COVID-19 patients is gaining rapid pace. This study discusses the role of IoT technology in healthcare during the SARS-CoV-2 pandemic. The study overviews different research, platforms, services, and products where IoT is used to combat the COVID-19 pandemic. Further, we discuss the integration of IoT and healthcare for COVID-19-related applications, focusing on a wide range of IoT applications with regard to SARS-CoV-2 tracing, testing, and treatment. Finally, we consider further challenges, issues, and some directions regarding IoT in order to uplift the healthcare system during COVID-19 and future pandemics.
Affiliation(s)
- Anichur Rahman
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Muaz Rahman
- Department of Electrical and Electronic Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh
- Dipanjali Kundu
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh
- Md Razaul Karim
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Shahab S Band
- Future Technology Research Center, College of Future, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
- Mehdi Sookhak
- Department of Computer Science, Texas A&M University-Corpus Christi, 6300 Ocean Drive, Corpus Christi, Texas, USA, 78412
50
Momin S, Lei Y, Tian Z, Wang T, Roper J, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Lung tumor segmentation in 4D CT images using motion convolutional neural networks. Med Phys 2021; 48:7141-7153. [PMID: 34469001 PMCID: PMC11700498 DOI: 10.1002/mp.15204] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 08/19/2021] [Accepted: 08/25/2021] [Indexed: 01/01/2023] Open
Abstract
PURPOSE Manual delineation on all breathing phases of lung cancer 4D CT image datasets can be challenging, exhaustive, and prone to subjective errors because of both the large number of images in the datasets and variations in the spatial location of tumors secondary to respiratory motion. The purpose of this work is to present a new deep learning-based framework for fast and accurate segmentation of lung tumors on 4D CT image sets. METHODS The proposed DL framework leverages a motion region convolutional neural network (R-CNN). Through integration of global and local motion estimation network architectures, the network can learn both major and minor changes caused by tumor motion. Our network design first extracts tumor motion information by feeding 4D CT images with consecutive phases into an integrated backbone network architecture, locating volumes-of-interest (VOIs) via a regional proposal network and removing irrelevant information via a regional convolutional neural network. Extracted motion information is then advanced into the subsequent global and local motion head network architecture to predict corresponding deformation vector fields (DVFs) and further adjust tumor VOIs. Binary masks of tumors are then segmented within adjusted VOIs via a mask head. A self-attention strategy is incorporated in the mask head network to remove any noisy features that might impact segmentation performance. We performed two sets of experiments. In the first experiment, we performed a five-fold cross-validation on 20 4D CT datasets, each consisting of 10 breathing phases (i.e., 200 3D image volumes in total). The network performance was also evaluated on an additional 200 unseen 3D image volumes from 20 hold-out 4D CT datasets. In the second experiment, we trained another model with 40 patients' 4D CT datasets from experiment 1 and evaluated it on an additional nine unseen patients' 4D CT datasets.
The Dice similarity coefficient (DSC), center of mass distance (CMD), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and volume difference (VD) between the manual and segmented tumor contours were computed to evaluate tumor detection and segmentation accuracy. The performance of our method was quantitatively evaluated against four different methods (VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy) across all evaluation metrics through a paired t-test. RESULTS The proposed fully automated DL method yielded good overall agreement with the ground truth for contoured tumor volume and segmentation accuracy. Our model yielded significantly better values of the evaluation metrics (p < 0.05) than all four competing methods in both experiments. On the hold-out datasets of experiments 1 and 2, our method yielded DSCs of 0.86 and 0.90, compared to 0.82 and 0.87, 0.75 and 0.83, 0.81 and 0.89, and 0.81 and 0.89 yielded by VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. The tumor VD between the ground truth and our method was the smallest, with a value of 0.50, compared to 0.99, 1.01, 0.92, and 0.93 between the ground truth and VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. CONCLUSIONS Our proposed DL framework for tumor segmentation on lung cancer 4D CT datasets demonstrates significant promise for fully automated delineation. The promising results of this work provide impetus for its integration into the 4D CT treatment planning workflow to improve the accuracy and efficiency of lung radiotherapy.
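For reference, the overlap and volume metrics reported in this abstract can be computed directly from binary masks. A minimal NumPy sketch for illustration (not the authors' code; the function names and the unit-voxel-volume default are our own assumptions):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # two empty masks are conventionally treated as a perfect match
    return 2.0 * intersection / denom if denom else 1.0

def volume_difference(pred: np.ndarray, truth: np.ndarray,
                      voxel_volume: float = 1.0) -> float:
    """Absolute volume difference (VD) between two binary masks."""
    return abs(int(pred.astype(bool).sum())
               - int(truth.astype(bool).sum())) * voxel_volume
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap; VD is reported in units of `voxel_volume`.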
Affiliation(s)
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Zhen Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA