1
Wang X, Niu Y, Liu H, Tian F, Zhang Q, Wang Y, Wang Y, Li Y. ThyroNet-X4 genesis: an advanced deep learning model for auxiliary diagnosis of thyroid nodules' malignancy. Sci Rep 2025; 15:4214. [PMID: 39905156 PMCID: PMC11794870 DOI: 10.1038/s41598-025-86819-w]
Abstract
Thyroid nodules are a common endocrine condition, and accurate differentiation between benign and malignant nodules is essential for making appropriate treatment decisions. Traditional ultrasound-based diagnoses often depend on the expertise of physicians, which introduces a risk of misdiagnosis. To address this challenge, this study proposes a novel deep learning model, ThyroNet-X4 Genesis, designed to automatically classify thyroid nodules as benign or malignant. Built on the ResNet architecture, the model enhances feature extraction by incorporating grouped convolutions and using larger convolution kernels, improving its ability to analyze thyroid ultrasound images. The model was trained and validated using publicly available thyroid ultrasound imaging datasets, and its generalization was further tested using an external validation dataset from HanZhong Central Hospital. The ThyroNet-X4 Genesis model achieved 85.55% and 71.70% accuracy on the internal training and validation sets, respectively, and 67.02% accuracy on the external validation set. These results surpass those of other mainstream models, highlighting its potential for clinical use in thyroid nodule classification. This work underscores the growing role of deep learning in thyroid nodule diagnosis and provides a foundation for future research in high-performance medical diagnostic models.
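The grouped convolutions the abstract credits for stronger feature extraction are easy to state concretely. Below is a minimal NumPy sketch of the idea, not the authors' implementation; the function name and shapes are illustrative only:

```python
import numpy as np

def grouped_conv2d(x, kernels, groups):
    """Valid-mode grouped 2D convolution (no stride, no padding).

    x       : (C_in, H, W) input feature map
    kernels : (C_out, C_in // groups, kH, kW) filter bank
    groups  : number of channel groups; each output channel only sees
              C_in // groups input channels, cutting parameters by a
              factor of `groups` versus a dense convolution.
    """
    c_in, h, w = x.shape
    c_out, c_g, kh, kw = kernels.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_g == c_in // groups
    oh, ow = h - kh + 1, w - kw + 1
    out = np.zeros((c_out, oh, ow))
    for o in range(c_out):
        g = o // (c_out // groups)         # group this output belongs to
        xs = x[g * c_g:(g + 1) * c_g]      # its slice of input channels
        for i in range(oh):
            for j in range(ow):
                out[o, i, j] = np.sum(xs[:, i:i + kh, j:j + kw] * kernels[o])
    return out
```

With `groups=1` this reduces to an ordinary dense convolution; larger `groups` values reduce the per-filter parameter count, which is the efficiency trade-off grouped designs such as ResNeXt exploit.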
Affiliation(s)
- Xiaoxue Wang, HanZhong Central Hospital, HanZhong, 723000, China
- Yupeng Niu, College of Information Engineering, Sichuan Agricultural University, Ya'an, 625000, China
- Hongli Liu, HanZhong Central Hospital, HanZhong, 723000, China
- Fa Tian, College of Information Engineering, Sichuan Agricultural University, Ya'an, 625000, China
- Qiang Zhang, HanZhong Central Hospital, HanZhong, 723000, China
- Yimeng Wang, HanZhong Central Hospital, HanZhong, 723000, China
- Yeju Wang, HanZhong Central Hospital, HanZhong, 723000, China
- Yijia Li, HanZhong Central Hospital, HanZhong, 723000, China
2
Wu M, Yan C, Sen G. Computer-aided diagnosis of hepatic cystic echinococcosis based on deep transfer learning features from ultrasound images. Sci Rep 2025; 15:607. [PMID: 39753933 PMCID: PMC11698856 DOI: 10.1038/s41598-024-85004-9]
Abstract
Hepatic cystic echinococcosis (HCE), a life-threatening liver disease, has five subtypes: single-cystic, polycystic, internal capsule collapse, solid mass, and calcified. Each subtype requires a different treatment, so an accurate diagnosis is the prerequisite for effective HCE therapy. However, clinicians with less diagnostic experience often misdiagnose HCE and confuse its five subtypes in clinical practice. Computer-aided diagnosis (CAD) techniques can help clinicians improve their diagnostic performance. This paper proposes an efficient CAD system that automatically differentiates the five subtypes of HCE from ultrasound images. The system adopts deep transfer learning, using a pre-trained convolutional neural network (CNN), VGG19, to extract deep CNN features from the ultrasound images. Two proven classifiers, k-nearest neighbor (KNN) and support vector machine (SVM), are then used to classify the extracted features. Three experiments with the same deep CNN features but different classifiers (softmax, KNN, SVM) were performed, each following 10 runs of five-fold cross-validation on a total of 1820 ultrasound images; results were compared using the Wilcoxon signed-rank test. The overall classification accuracy was 90.46 ± 1.59% for the KNN classifier, 90.92 ± 2.49% for the transfer-learned VGG19 (softmax), and 92.01 ± 1.48% for the SVM, indicating that the SVM classifier on deep CNN features achieved the best performance (P < 0.05). Other performance measures used in the study are specificity, sensitivity, precision, F1-score, and area under the curve (AUC). In addition, the paper evaluates the system with smaller training data to demonstrate its practical capability.
These observations imply that transfer learning is a useful technique when the availability of medical images is limited. The proposed classification system, combining deep CNN features with an SVM classifier, is potentially helpful for clinicians seeking to improve their HCE diagnostic performance in clinical practice.
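As a toy stand-in for the classifier stage (deep CNN features followed by KNN/SVM), here is a self-contained k-nearest-neighbor vote over feature vectors; in the paper the rows would be VGG19 activations, but any numeric vectors work. This is an illustration, not the study's code:

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Minimal k-nearest-neighbor classifier on feature vectors."""
    preds = []
    for q in query_feats:
        d = np.linalg.norm(train_feats - q, axis=1)  # Euclidean distances
        nearest = train_labels[np.argsort(d)[:k]]    # labels of k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)
```

An SVM would replace the vote with a learned maximum-margin boundary, but the interface, features in and class labels out, is the same.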
Affiliation(s)
- Miao Wu, College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, 830017, China
- Chuanbo Yan, College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, 830017, China
- Gan Sen, College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, 830017, China
3
Biesok M, Juszczyk J, Badura P. Breast tumor segmentation in ultrasound using distance-adapted fuzzy connectedness, convolutional neural network, and active contour. Sci Rep 2024; 14:25859. [PMID: 39468220 PMCID: PMC11519628 DOI: 10.1038/s41598-024-76308-x]
Abstract
This study addresses computer-aided breast cancer diagnosis through a hybrid framework for breast tumor segmentation in ultrasound images. The core of the three-stage method is an autoencoder convolutional neural network. In the first stage, we prepare a hybrid pseudo-color image through multiple instances of fuzzy connectedness analysis with a novel distance-adapted fuzzy affinity, producing different weight combinations to determine connectivity maps driven by particular image specifics. After the hybrid image is processed by the deep network, we adjust the segmentation outcome with the Chan-Vese active contour model. We consider the incorporation of fuzzy connectedness into the input data preparation for deep-learning image analysis to be the main contribution of this study. The method is trained and validated on a combined dataset of 993 breast ultrasound images from three public collections frequently used in recent studies on breast tumor segmentation. The experiments address essential settings and hyperparameters of the method, e.g., the network architecture, input image size, and active contour setup. The tumor segmentation reaches a median Dice index of 0.86 (mean 0.79) over the combined database. We compare our results against the most recent state of the art (2022-2023) on the same datasets and find our model comparable in segmentation performance.
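The Dice index used to report segmentation quality has a one-line definition; a small NumPy version (illustrative, not the authors' evaluation script):

```python
import numpy as np

def dice_index(pred, gt):
    """Dice similarity for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:                 # both masks empty: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

Reporting both the median (0.86) and the mean (0.79), as above, is informative because a few badly failed cases pull the mean down while leaving the median intact.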
Affiliation(s)
- Marta Biesok, Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Jan Juszczyk, Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
- Pawel Badura, Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800, Zabrze, Poland
4
Moral P, Mustafi D, Mustafi A, Sahana SK. CystNet: An AI driven model for PCOS detection using multilevel thresholding of ultrasound images. Sci Rep 2024; 14:25012. [PMID: 39443622 PMCID: PMC11499604 DOI: 10.1038/s41598-024-75964-3]
Abstract
Polycystic ovary syndrome (PCOS) is a widespread endocrine dysfunction affecting women of reproductive age, characterized by androgen excess and a variety of associated symptoms, including acne, alopecia, and hirsutism. It involves the presence of multiple immature follicles in the ovaries, which can disrupt normal ovulation and lead to hormonal imbalances and associated health complications. Routine diagnostic methods rely on manual interpretation of ultrasound (US) images and clinical assessments, which are time-consuming and prone to errors, so an automated system is essential for streamlining the diagnostic process and enhancing accuracy. By automatically analyzing follicle characteristics and other relevant features, this research aims to facilitate timely intervention and reduce the burden on healthcare professionals. The present study proposes an advanced automated system for detecting and classifying PCOS from ultrasound images. Leveraging artificial intelligence (AI) techniques, the system examines affected and unaffected cases to enhance diagnostic accuracy. Pre-processing of the input images incorporates resizing, normalization, augmentation, the watershed technique, and multilevel thresholding for precise image segmentation. Feature extraction is performed by the proposed CystNet technique, followed by PCOS classification using both fully connected layers with 5-fold cross-validation and traditional machine learning classifiers. The performance of the model is rigorously evaluated using a comprehensive range of metrics, including AUC score, accuracy, specificity, precision, F1-score, recall, and loss, along with a detailed confusion matrix analysis. The model demonstrated an accuracy of [Formula: see text] when utilizing a fully connected classification layer, as determined by 5-fold cross-validation.
Additionally, it achieved an accuracy of [Formula: see text] when employing an ensemble ML classifier. This approach could be suggested for predicting PCOS or similar diseases using datasets with multimodal characteristics.
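Multilevel thresholding, one of the pre-processing steps above, can be illustrated with an exhaustive two-threshold Otsu-style search that splits gray levels into three classes by minimizing total within-class variance. This brute-force sketch is for intuition only; production code would use histogram recurrences rather than scanning every pair:

```python
import numpy as np

def two_level_otsu(img):
    """Exhaustive two-threshold search: partition gray values into three
    classes (<= t1, (t1, t2], > t2) minimizing the summed within-class
    variance, the criterion behind multilevel Otsu thresholding."""
    vals = img.ravel().astype(float)
    grays = np.unique(vals)
    best, best_t = np.inf, (grays[0], grays[0])
    for i, t1 in enumerate(grays[:-1]):
        for t2 in grays[i + 1:]:
            classes = [vals[vals <= t1],
                       vals[(vals > t1) & (vals <= t2)],
                       vals[vals > t2]]
            score = sum(c.size * c.var() for c in classes if c.size)
            if score < best:
                best, best_t = score, (t1, t2)
    return best_t
```

On an image with three well-separated intensity clusters, the recovered thresholds fall between the clusters, which is exactly what a follicle-versus-background segmentation step needs.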
Affiliation(s)
- Poonam Moral, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215, India
- Debjani Mustafi, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215, India
- Abhijit Mustafi, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215, India
- Sudip Kumar Sahana, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, 835215, India
5
Sha M. Segmentation of ovarian cyst in ultrasound images using AdaResU-net with optimization algorithm and deep learning model. Sci Rep 2024; 14:18868. [PMID: 39143122 PMCID: PMC11325020 DOI: 10.1038/s41598-024-69427-y]
Abstract
Ovarian cysts pose significant health risks, including torsion, infertility, and cancer, necessitating rapid and accurate diagnosis. Ultrasonography is commonly employed for screening, yet its effectiveness is hindered by challenges such as weak contrast, speckle noise, and hazy boundaries. This study proposes an adaptive deep learning-based segmentation technique using a database of ovarian ultrasound cyst images. A Guided Trilateral Filter (GTF) is applied for noise reduction in pre-processing. Segmentation utilizes an adaptive convolutional neural network (AdaResU-net) for precise cyst size identification and benign/malignant classification, optimized via the Wild Horse Optimization (WHO) algorithm. Two objective functions, the Dice loss coefficient and weighted cross-entropy, are optimized to enhance segmentation accuracy. Classification of cyst types is performed using a Pyramidal Dilated Convolutional (PDC) network. The method achieves a segmentation accuracy of 98.87%, surpassing existing techniques and thereby promising improved diagnostic accuracy and patient care outcomes.
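The two objective functions named above, Dice loss and weighted cross-entropy, combine naturally into a single criterion. A hedged NumPy sketch; the foreground weighting and the mixing constant are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dice_loss(prob, gt, eps=1e-7):
    """Soft Dice loss on predicted foreground probabilities."""
    inter = (prob * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + gt.sum() + eps)

def weighted_bce(prob, gt, w_fg=2.0, w_bg=1.0, eps=1e-7):
    """Cross-entropy with a heavier weight on scarce foreground pixels."""
    p = np.clip(prob, eps, 1 - eps)
    per_pix = -(w_fg * gt * np.log(p) + w_bg * (1 - gt) * np.log(1 - p))
    return per_pix.mean()

def combined_loss(prob, gt, alpha=0.5):
    """Convex mix of the two objectives being jointly optimized."""
    return alpha * dice_loss(prob, gt) + (1 - alpha) * weighted_bce(prob, gt)
```

Dice loss rewards overlap directly (useful when cysts occupy few pixels), while the weighted cross-entropy keeps per-pixel gradients well behaved; mixing them is a common way to get both behaviors.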
Affiliation(s)
- Mohemmed Sha, Department of Software Engineering, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, 16278, Al-Kharj, Saudi Arabia
6
Ru J, Zhu Z, Shi J. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach. BMC Med Imaging 2024; 24:133. [PMID: 38840240 PMCID: PMC11155188 DOI: 10.1186/s12880-024-01307-3]
Abstract
BACKGROUND Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Deep learning techniques are increasingly applied as auxiliary tools that provide predictive results on which doctors can base decisions about further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), extracting features from images at the spatial level and from graphs at the geometric level. The input images are small and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves labor and memory. RESULTS The classification AUROC of our proposed method is 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, with balanced accuracies of 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS Our hybrid approach can learn inter-center features across multi-center data and intra-center features of local data. It shows potential for aiding doctors in early-stage breast tumor classification from ultrasound.
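Balanced accuracy, reported alongside AUROC above, is the mean of per-class recalls, which keeps a majority class from dominating the score. A minimal implementation (illustrative, not the study's code):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to benign/malignant imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

For a classifier that always predicts the majority class on a 3:1 imbalanced set, plain accuracy is 75% but balanced accuracy is only 50%, which is why it is the fairer summary for tumor classification.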
Affiliation(s)
- Jintao Ru, Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China
- Zili Zhu, Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China
- Jialin Shi, Rehabilitation Medicine Institute, Zhejiang Rehabilitation Medical Center, Hangzhou, Zhejiang, People's Republic of China
7
Liang B, Peng F, Luo D, Zeng Q, Wen H, Zheng B, Zou Z, An L, Wen H, Wen X, Liao Y, Yuan Y, Li S. Automatic segmentation of 15 critical anatomical labels and measurements of cardiac axis and cardiothoracic ratio in fetal four chambers using nnU-NetV2. BMC Med Inform Decis Mak 2024; 24:128. [PMID: 38773456 PMCID: PMC11106923 DOI: 10.1186/s12911-024-02527-x]
Abstract
BACKGROUND Accurate segmentation of critical anatomical structures in fetal four-chamber view images is essential for the early detection of congenital heart defects. Current prenatal screening methods rely on manual measurements, which are time-consuming and prone to inter-observer variability. This study develops an AI-based model using the state-of-the-art nnU-NetV2 architecture for automatic segmentation and measurement of key anatomical structures in fetal four-chamber view images. METHODS A dataset of 1,083 high-quality fetal four-chamber view images was annotated with 15 critical anatomical labels and divided into training/validation (867 images) and test (216 images) sets. An nnU-NetV2 model was trained on the annotated images and evaluated using the mean Dice coefficient (mDice) and mean intersection over union (mIoU). The model's performance in automatically computing the cardiac axis (CAx) and cardiothoracic ratio (CTR) was compared with measurements from sonographers with varying levels of experience. RESULTS The model achieved an mDice of 87.11% and an mIoU of 77.68% for the segmentation of critical anatomical structures. Its automated CAx and CTR measurements showed strong agreement with those of experienced sonographers, with intraclass correlation coefficients (ICCs) of 0.83 and 0.81, respectively. Bland-Altman analysis further confirmed the high agreement between the model and experienced sonographers. CONCLUSION We developed an AI-based model using the nnU-NetV2 architecture for accurate segmentation and automated measurement of critical anatomical structures in fetal four-chamber view images. The model demonstrated high segmentation accuracy and strong agreement with experienced sonographers in computing clinically relevant parameters.
This approach has the potential to improve the efficiency and reliability of prenatal cardiac screening, ultimately contributing to the early detection of congenital heart defects.
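Bland-Altman analysis, used above to compare the model with sonographers, reduces to the mean difference (bias) between two sets of measurements and its 95% limits of agreement. A small sketch, illustrative rather than the study's analysis code:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for paired measurements from two raters:
    returns the bias (mean difference) and the 95% limits of agreement
    (bias ± 1.96 * SD of the differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow limits of agreement around a near-zero bias are what "strong agreement" means operationally: the model's CAx/CTR values rarely deviate from an expert's by more than measurement noise.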
Affiliation(s)
- Bocheng Liang, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Fengfeng Peng, Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Dandan Luo, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Qing Zeng, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Huaxuan Wen, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Bowen Zheng, Department of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082, China
- Zhiying Zou, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Liting An, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Huiying Wen, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Xin Wen, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Yimei Liao, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Ying Yuan, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
- Shengli Li, Department of Ultrasound, Shenzhen Maternity&Child Healthcare Hospital, Shenzhen, 518028, China
8
Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. [PMID: 37937827 DOI: 10.1002/mp.16812]
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results to distinguish benign from malignant cases, and BI-RADS categories indicating the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparison between approaches. We therefore present a publicly available BUS dataset whose novelty lies in the substantial increase in cases with the above annotations and the inclusion of standardized partitions for objectively assessing and comparing CAD systems. ACQUISITION AND VALIDATION METHODS The dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). It includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed BI-RADS assessments in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems.
BUS images and reference segmentations are saved as Portable Network Graphics (PNG) files, and the dataset information is stored in separate comma-separated value (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess AI-based lesion detection and segmentation methods, and the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
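BUS-BRA ships its partitions as CSV files; purely to illustrate the mechanics of such standardized k-fold splits (this is not the dataset's own partitioning code), a deterministic patient-level splitter might look like:

```python
def kfold_partitions(case_ids, k=5):
    """Deterministic k-fold split over patient IDs. Splitting by patient
    rather than by image avoids leaking different images of the same
    patient into both the train and test sides of a fold."""
    folds = [case_ids[i::k] for i in range(k)]        # round-robin assignment
    splits = []
    for i in range(k):
        test = folds[i]
        train = [c for j, f in enumerate(folds) if j != i for c in f]
        splits.append((train, test))
    return splits
```

Fixing and publishing the folds, as BUS-BRA does, is what makes results from different papers directly comparable: everyone trains and tests on exactly the same images.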
Affiliation(s)
- Wilfrido Gómez-Flores, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
9
Yadav N, Dass R, Virmani J. Assessment of encoder-decoder-based segmentation models for thyroid ultrasound images. Med Biol Eng Comput 2023. [PMID: 37353695 DOI: 10.1007/s11517-023-02849-4]
Abstract
Encoder-decoder-based semantic segmentation models classify image pixels into the corresponding class, such as the region of interest (ROI) or background. In the present study, simple, dilated convolution, series, and directed acyclic graph (DAG)-based encoder-decoder semantic segmentation models were implemented, i.e., SegNet (VGG16), SegNet (VGG19), U-Net, MobileNetv2, ResNet18, ResNet50, Xception, and Inception networks, to segment thyroid tumor ultrasound (TTUS) images. Transfer learning was used to train these segmentation networks on original and despeckled TTUS images, and performance was computed using the mIoU and mDC metrics. Based on exhaustive experiments, the ResNet50-based segmentation model obtained the best results, both objectively (mIoU of 0.87, mDC of 0.94) and according to radiologist opinion on the shape, margin, and echogenicity of the segmented lesions. The ResNet50-based model thus provides better segmentation under both objective and subjective assessment and may be used in healthcare systems to identify thyroid nodules accurately in real time.
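The mIoU metric reported above averages per-class intersection-over-union; a compact NumPy version (illustrative, not the study's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in the masks."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue                      # class absent from both masks
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```

mIoU is stricter than the Dice coefficient (mDC) on the same prediction, which is why papers typically report both, as this one does with 0.87 versus 0.94.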
Affiliation(s)
- Niranjan Yadav, Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India
- Rajeshwar Dass, Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India
- Jitendra Virmani, Central Scientific Instruments Organization, Council of Scientific and Industrial Research, Chandigarh, 160030, India
10
Zheng T, Qin H, Cui Y, Wang R, Zhao W, Zhang S, Geng S, Zhao L. Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture. BMC Med Imaging 2023; 23:56. [PMID: 37060061 PMCID: PMC10105426 DOI: 10.1186/s12880-023-01011-8]
Abstract
BACKGROUND Identifying thyroid nodules' boundaries is crucial for making an accurate clinical assessment, but manual segmentation is time-consuming. This paper applied U-Net and improved variants to automatically segment thyroid nodules and glands. METHODS The 5822 ultrasound images used in the experiment came from two centers; 4658 images formed the training dataset and 1164 images formed the independent mixed test dataset. Based on U-Net, a deformable-pyramid split-attention residual U-Net (DSRU-Net) was proposed by introducing the ResNeSt block, atrous spatial pyramid pooling, and deformable convolution v3. This method combines contextual information and extracts features of interest more effectively, giving it advantages in segmenting nodules and glands of different shapes and sizes. RESULTS DSRU-Net obtained 85.8% mean intersection over union, 92.5% mean Dice coefficient, and 94.1% nodule Dice coefficient, improvements of 1.8%, 1.3%, and 1.9% over U-Net. CONCLUSIONS Our method is more capable of identifying and segmenting glands and nodules than the original U-Net, as the comparative results show.
Affiliation(s)
- Tianlei Zheng, School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, China; Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Hang Qin, Department of Medical Equipment Management, Nanjing First Hospital, Nanjing, 221000, China
- Yingying Cui, Department of Pathology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Rong Wang, Department of Ultrasound Medicine, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Weiguo Zhao, Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Shijin Zhang, Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Shi Geng, Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Lei Zhao, Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
11
Classification of breast cancer from histopathology images using an ensemble of deep multiscale networks. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.006]
12
Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. Comput Intell Neurosci 2022; 2022:3905998. [PMID: 35795762 PMCID: PMC9252688 DOI: 10.1155/2022/3905998]
Abstract
To achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework, combined with residual blocks and an attention mechanism. The residual block is introduced into the U-Net network to avoid the performance degradation caused by vanishing gradients and to reduce the training difficulty of the deep network. At the same time, a fusion attention mechanism combining spatial and channel attention is introduced into the model to improve its ability to capture feature information from ultrasound images and to enable accurate recognition and extraction of breast tumors. The experimental results show that the Dice index of the proposed method reaches 0.921, demonstrating excellent segmentation performance.
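To make the fused spatial-and-channel attention idea concrete, here is a toy NumPy gate over a (C, H, W) feature map. The weight shapes, the global-average-pooling channel descriptor, and the sigmoid gating are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fused_attention(feat, w_ch, w_sp):
    """Toy fusion of channel and spatial attention on a (C, H, W) map.

    Channel gate: global average pooling -> linear map w_ch (C, C) -> sigmoid.
    Spatial gate: 1x1 projection across channels with w_sp (C,) -> sigmoid.
    The two gates multiply the features, so a location is kept only if
    both its channel and its spatial position are deemed informative.
    """
    pooled = feat.mean(axis=(1, 2))                  # (C,) channel descriptor
    ch_gate = sigmoid(w_ch @ pooled)                 # (C,) channel attention
    sp_gate = sigmoid(np.tensordot(w_sp, feat, 1))   # (H, W) spatial attention
    return feat * ch_gate[:, None, None] * sp_gate[None, :, :]
```

In a trained network the weights would be learned; the point of the sketch is only the multiplicative composition of the two attention maps.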
13
Jarosik P, Klimonda Z, Lewandowski M, Byra M. Breast lesion classification based on ultrasonic radio-frequency signals using convolutional neural networks. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.04.002]
14
Zhuang Z, Li N, Joseph Raj AN, Mahesh VGV, Qiu S. An RDAU-NET model for lesion segmentation in breast ultrasound images. PLoS One 2019; 14:e0221535. [PMID: 31442268 PMCID: PMC6707567 DOI: 10.1371/journal.pone.0221535]
Abstract
Breast cancer is a common gynecological disease that poses a great threat to women's health due to its high malignancy rate. Breast cancer screening tests are used to find warning signs or symptoms for early detection, and ultrasound screening is currently the preferred method for breast cancer diagnosis. Localization and segmentation of lesions in breast ultrasound (BUS) images are helpful for clinical diagnosis of the disease. In this paper, an RDAU-NET (Residual-Dilated-Attention-Gate-UNet) model is proposed and employed to segment tumors in BUS images. The model is based on the conventional U-Net, but the plain neural units are replaced with residual units to enhance edge information and overcome the network-degradation problem associated with deep networks. To increase the receptive field and acquire more characteristic information, dilated convolutions are used to process the feature maps obtained from the encoder stages. The traditional cropping and copying between the encoder-decoder pipelines are replaced by attention gate modules, which enhance learning through suppression of background information. When tested on BUS images containing benign and malignant tumors, the model produced excellent segmentation results compared with other deep networks. A variety of quantitative indicators, including accuracy, Dice coefficient, AUC (area under the curve), precision, sensitivity, specificity, recall, F1-score, and M-IOU (mean intersection over union), all exceeded 80%. The experimental results illustrate that the proposed RDAU-NET model can accurately segment breast lesions compared with other deep learning models and thus has good prospects for clinical diagnosis.
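Dilated convolution, which the abstract uses to widen the receptive field, is easiest to see in one dimension: kernel taps are spaced `dilation` apart, so a 3-tap kernel with dilation 2 covers 5 input samples without adding parameters. An illustrative sketch, not the RDAU-NET code:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1D dilated convolution: taps spaced `dilation` apart, so the
    receptive field of one output is (len(kernel) - 1) * dilation + 1."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = [sum(kernel[j] * x[i + j * dilation] for j in range(k))
           for i in range(len(x) - span + 1)]
    return np.array(out, dtype=float)
```

Stacking layers with growing dilation rates grows the receptive field exponentially while the parameter count grows only linearly, which is exactly the trade-off that motivates their use on encoder feature maps.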
Affiliation(s)
- Zhemin Zhuang, Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Nan Li, Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj, Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Vijayalakshmi G. V. Mahesh, Department of Electronics and Communication Engineering, BMS Institute of Technology and Management, Bengaluru, Karnataka, India
- Shunmin Qiu, Imaging Department, First Hospital of Medical College of Shantou University, Shantou, Guangdong, China
15
Kriti, Virmani J, Agarwal R. Effect of despeckle filtering on classification of breast tumors using ultrasound images. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.02.004]