1
Yadav N, Dass R, Virmani J. Objective assessment of segmentation models for thyroid ultrasound images. J Ultrasound 2023; 26:673-685. [PMID: 36195781 PMCID: PMC10469139 DOI: 10.1007/s40477-022-00726-8]
Abstract
Ultrasound features related to thyroid lesion structure, shape, volume, and margins are considered when determining cancer risk. Automatic segmentation of the thyroid lesion would allow these sonographic features to be estimated. On the basis of clinical ultrasonography B-mode scans, a multi-output CNN-based semantic segmentation is used to separate the cystic and solid components of thyroid nodules. Semantic segmentation is an automatic technique that labels ultrasound (US) pixels with an appropriate class or pixel category, i.e., lesion or background. In the present study, encoder-decoder-based semantic segmentation models, i.e., SegNet using VGG16, UNet, and Hybrid-UNet, were implemented for segmentation of thyroid US images. For this work, 820 thyroid US images were collected from the DDTI and ultrasoundcases.info (USC) datasets. These segmentation models were trained using a transfer learning approach with original and despeckled thyroid US images. The performance of the segmentation models was evaluated by analyzing the overlap between the true lesion contour marked by the radiologist and the lesion retrieved by the segmentation model. The mean intersection over union (mIoU), mean dice coefficient (mDC), TPR, TNR, FPR, and FNR metrics were used to measure performance. Based on exhaustive experiments and these performance evaluation parameters, it is observed that the proposed Hybrid-UNet segmentation model segments thyroid nodules and cystic components effectively.
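The overlap metrics named in this abstract (mIoU, mDC) have a direct pixel-level definition. As a hedged illustration only (not the authors' code; the function name and toy masks are invented for this sketch), IoU and Dice for one binary lesion mask pair can be computed as:

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Intersection-over-Union and Dice coefficient for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0    # both masks empty -> perfect match
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 4x4 lesion masks (hypothetical, for illustration only)
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
iou, dice = iou_and_dice(pred, gt)  # intersection = 3, union = 4
```

The mean variants (mIoU, mDC) simply average these values over all test images.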
Affiliation(s)
- Niranjan Yadav
- Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039 India
- Rajeshwar Dass
- Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039 India
- Jitendra Virmani
- Central Scientific Instruments Organization, Council of Scientific and Industrial Research, Chandigarh, 160030 India
2
Yu R, Yan S, Gao J, Zhao M, Fu X, Yan Y, Li M, Li X. FBN: Weakly Supervised Thyroid Nodule Segmentation Optimized by Online Foreground and Background. Ultrasound Med Biol 2023:S0301-5629(23)00138-2. [PMID: 37308370 DOI: 10.1016/j.ultrasmedbio.2023.04.009]
Abstract
OBJECTIVE The main objective of the work described here was to train a semantic segmentation model for thyroid nodule ultrasound images using only classification data, reducing the pressure of obtaining pixel-level labeled data sets. Furthermore, we improved the segmentation performance of the model by mining image information to narrow the gap between weakly supervised semantic segmentation (WSSS) and fully supervised semantic segmentation. METHODS Most WSSS methods use a class activation map (CAM) to generate segmentation results. However, the lack of supervision information makes it difficult for a CAM to highlight the object region completely. Therefore, we propose a novel foreground and background pair (FB-Pair) representation, which consists of the high- and low-response regions in the original image highlighted by the original CAM generated online. During training, the original CAM is revised using the CAM generated by the FB-Pair. In addition, we design a self-supervised learning pretext task based on FB-Pair, which requires the model to predict whether the pixels in an FB-Pair are from the original image. After this task, the model can accurately distinguish between different categories of objects. RESULTS Experiments on the thyroid nodule ultrasound image (TUI) data set revealed that our proposed method outperformed existing methods, with a 5.7% improvement in mean intersection-over-union (mIoU) segmentation performance over the second-best method and a reduction to 2.9% in the performance difference between benign and malignant nodules. CONCLUSION Our method trains a well-performing segmentation model on ultrasound images of thyroid nodules using only classification data. In addition, we determined that a CAM can take full advantage of the information in the images to highlight target regions more accurately and thus improve segmentation performance.
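For readers unfamiliar with CAMs, the standard recipe is a classifier-weighted sum of the final convolutional feature maps, rectified and normalized so high values mark class-discriminative regions. The sketch below shows only this generic recipe, not the paper's FB-Pair refinement; the array shapes, weights, and threshold are illustrative assumptions:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Generic CAM: weighted sum of (C, H, W) feature maps with the
    classifier weights for one class, ReLU'd and scaled to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)          # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for thresholding/overlay
    return cam

rng = np.random.default_rng(0)
feats = rng.random((8, 16, 16))   # 8 channels of 16x16 features (hypothetical)
w = rng.random(8)                 # classifier weights for a "nodule" class
cam = class_activation_map(feats, w)
mask = cam > 0.5                  # coarse pseudo-label for WSSS
```

Thresholding such a map is what yields the incomplete pseudo-labels that methods like FB-Pair then try to correct.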
Affiliation(s)
- Ruiguo Yu
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Shaoqi Yan
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Jie Gao
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Mankun Zhao
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xuzhou Fu
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Yang Yan
- Tianjin Medical University General Hospital, Tianjin Medical University, Tianjin, China
- Ming Li
- Tianjin Medical University General Hospital, Tianjin Medical University, Tianjin, China
- Xuewei Li
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
3
Zheng T, Qin H, Cui Y, Wang R, Zhao W, Zhang S, Geng S, Zhao L. Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture. BMC Med Imaging 2023; 23:56. [PMID: 37060061 PMCID: PMC10105426 DOI: 10.1186/s12880-023-01011-8]
Abstract
BACKGROUND Identifying thyroid nodule boundaries is crucial for making an accurate clinical assessment, but manual segmentation is time-consuming. This paper utilized U-Net and its improved variants to automatically segment thyroid nodules and glands. METHODS The 5822 ultrasound images used in the experiment came from two centers; 4658 images were used as the training dataset and 1164 as the independent mixed test dataset. Based on U-Net, a deformable-pyramid split-attention residual U-Net (DSRU-Net) was proposed by introducing the ResNeSt block, atrous spatial pyramid pooling, and deformable convolution v3. This method combined context information, extracted features of interest better, and had advantages in segmenting nodules and glands of different shapes and sizes. RESULTS DSRU-Net obtained 85.8% mean Intersection over Union, 92.5% mean dice coefficient, and 94.1% nodule dice coefficient, increases of 1.8%, 1.3%, and 1.9% over U-Net. CONCLUSIONS Our method is more capable of identifying and segmenting glands and nodules than the original method, as shown by the results of correlational studies.
Affiliation(s)
- Tianlei Zheng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, China
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Hang Qin
- Department of Medical Equipment Management, Nanjing First Hospital, Nanjing, 221000, China
- Yingying Cui
- Department of Pathology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Rong Wang
- Department of Ultrasound Medicine, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Weiguo Zhao
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Shijin Zhang
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Shi Geng
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
- Lei Zhao
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
4
Lin X, Zhou X, Tong T, Nie X, Wang L, Zheng H, Li J, Xue E, Chen S, Zheng M, Chen C, Jiang H, Du M, Gao Q. A Super-resolution Guided Network for Improving Automated Thyroid Nodule Segmentation. Comput Methods Programs Biomed 2022; 227:107186. [PMID: 36334526 DOI: 10.1016/j.cmpb.2022.107186]
Abstract
BACKGROUND AND OBJECTIVE A thyroid nodule is an abnormal lump that grows in the thyroid gland and is an early symptom of thyroid cancer. To diagnose and treat thyroid cancer at the earliest stage, it is desirable to characterize the nodule accurately. Ultrasound thyroid nodule segmentation is a challenging task due to speckle noise, intensity heterogeneity, low contrast and low resolution. In this paper, we propose a novel framework to improve the accuracy of thyroid nodule segmentation. METHODS Different from previous work, a super-resolution reconstruction network is first constructed to upscale the resolution of the input ultrasound image. After that, our proposed N-shape network is utilized to perform the segmentation task. The guidance of the super-resolution reconstruction network makes the high-frequency information of the input thyroid ultrasound image richer and more comprehensive than in the original image. Our N-shape network consists of several atrous spatial pyramid pooling blocks, a multi-scale input layer, a U-shape convolutional network with attention blocks and a proposed parallel atrous convolution (PAC) module. These modules help capture context information at multiple scales so that semantic features can be fully utilized for lesion segmentation. In particular, our proposed PAC module further improves segmentation by extracting high-level semantic features from different receptive fields. We use the UTNI-2021 dataset for model training, validation and testing. RESULTS The experimental results show that our proposed method achieves a Dice value of 91.9%, an mIoU value of 87.0%, a Precision value of 88.0%, a Recall value of 83.7% and an F1-score of 84.3%, outperforming most state-of-the-art methods. CONCLUSIONS Our method achieves the best performance on the UTNI-2021 dataset and provides a new way of performing ultrasound image segmentation. We believe that our method can provide doctors with reliable auxiliary diagnostic information in clinical practice.
Affiliation(s)
- Xingtao Lin
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Xiaogen Zhou
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University; Imperial Vision Technology
- Xingqing Nie
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Luoyan Wang
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Haonan Zheng
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Jing Li
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Shun Chen
- Fujian Medical University Union Hospital
- Cong Chen
- Fujian Medical University Union Hospital
- Haiyan Jiang
- Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Min Du
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University; Imperial Vision Technology
5
Zhu PS, Zhang YR, Ren JY, Li QL, Chen M, Sang T, Li WX, Li J, Cui XW. Ultrasound-based deep learning using the VGGNet model for the differentiation of benign and malignant thyroid nodules: A meta-analysis. Front Oncol 2022; 12:944859. [PMID: 36249056 PMCID: PMC9554631 DOI: 10.3389/fonc.2022.944859]
Abstract
Objective The aim of this study was to evaluate the accuracy of deep learning using the convolutional neural network VGGNet model in distinguishing benign and malignant thyroid nodules based on ultrasound images. Methods Relevant studies were selected from the PubMed, Embase, Cochrane Library, China National Knowledge Infrastructure (CNKI), and Wanfang databases that used the deep learning-related convolutional neural network VGGNet model to classify benign and malignant thyroid nodules based on ultrasound images. Cytology and pathology were used as gold standards. Eligibility and risk of bias were assessed using the QUADAS-2 tool, and the diagnostic accuracy of deep learning VGGNet was analyzed with pooled sensitivity, pooled specificity, diagnostic odds ratio, and the area under the curve. Results A total of 11 studies were included in this meta-analysis. The overall estimates of sensitivity and specificity were 0.87 [95% CI (0.83, 0.91)] and 0.85 [95% CI (0.79, 0.90)], respectively. The diagnostic odds ratio was 38.79 [95% CI (22.49, 66.91)]. The area under the curve was 0.93 [95% CI (0.90, 0.95)]. No obvious publication bias was found. Conclusion Deep learning using the convolutional neural network VGGNet model based on ultrasound images showed good diagnostic efficacy in distinguishing benign and malignant thyroid nodules. Systematic Review Registration https://www.crd.york.ac.uk/prospero, identifier CRD42022336701.
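The indices pooled in this meta-analysis are all derived from per-study 2x2 tables. As a simplified sketch of the per-study arithmetic only (actual meta-analyses pool across studies, typically with bivariate random-effects models, and the counts below are hypothetical, not taken from the included studies):

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Per-study diagnostic accuracy indices from a 2x2 table."""
    sens = tp / (tp + fn)          # sensitivity (true-positive rate)
    spec = tn / (tn + fp)          # specificity (true-negative rate)
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    return sens, spec, dor

# Hypothetical single-study counts for illustration only
sens, spec, dor = diagnostic_indices(tp=87, fp=15, fn=13, tn=85)
# sens = 0.87, spec = 0.85, dor = 7395 / 195
```

The summary receiver operating characteristic curve and its area are then fitted over all studies' (sensitivity, 1 - specificity) pairs.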
Affiliation(s)
- Pei-Shan Zhu
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Yu-Rui Zhang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jia-Yu Ren
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qiao-Li Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Ming Chen
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Tian Sang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Wen-Xiao Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jun Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China; NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China. *Correspondence: Jun Li; Xin-Wu Cui
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. *Correspondence: Jun Li; Xin-Wu Cui
6
Wildman-Tobriner B, Taghi-Zadeh E, Mazurowski MA. Artificial Intelligence (AI) Tools for Thyroid Nodules on Ultrasound, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022. [PMID: 35383487 DOI: 10.2214/AJR.22.27430]
Abstract
Artificial intelligence (AI) methods for evaluating thyroid nodules on ultrasound have been widely described in the literature, with the reported performance of AI tools matching or in some instances surpassing radiologists. As these data have accumulated, products for classification and risk stratification of thyroid nodules on ultrasound have become commercially available. This article reviews FDA-approved products currently on the market, with a focus on product features, reported performance, and considerations for implementation. The products perform risk stratification primarily using the Thyroid Imaging Reporting and Data System (TI-RADS), though they may provide additional prediction tools independent of TI-RADS. Key issues in implementation include integration with radiologist interpretation, impact on workflow and efficiency, and performance monitoring. AI applications beyond nodule classification, including report construction and incidental-findings follow-up, are also described. Anticipated future directions of research and development in AI tools for thyroid nodules are highlighted.
7
Sun J, Li C, Lu Z, He M, Zhao T, Li X, Gao L, Xie K, Lin T, Sui J, Xi Q, Zhang F, Ni X. TNSNet: Thyroid nodule segmentation in ultrasound imaging using soft shape supervision. Comput Methods Programs Biomed 2022; 215:106600. [PMID: 34971855 DOI: 10.1016/j.cmpb.2021.106600]
Abstract
BACKGROUND AND OBJECTIVES Thyroid nodules are a common disorder of the endocrine system. Segmentation of thyroid nodules on ultrasound images is an important step in the evaluation and diagnosis of nodules and an initial step in computer-aided diagnostic systems. The accuracy and consistency of segmentation remain a challenge due to the low contrast, speckle noise, and low resolution of ultrasound images. Therefore, the study of deep learning-based algorithms for thyroid nodule segmentation is important. This study utilizes soft shape supervision to improve the detection and segmentation of nodule boundaries. Soft shape supervision can emphasize boundary features and assist the network in segmenting nodules accurately. METHODS We propose a dual-path convolutional neural network, comprising region and shape paths, which uses DeepLabV3+ as the backbone. Soft shape supervision blocks are inserted between the two paths to implement cross-path attention mechanisms. The blocks enhance the representation of shape features and add them to the region path as auxiliary information. Thus, the network can accurately detect and segment thyroid nodules. RESULTS We collected 3786 ultrasound images of thyroid nodules to train and test our network. Compared with the ground truth, the test results achieve an accuracy of 95.81% and a DSC of 85.33%. The visualization results also suggest that the network has learned clear and accurate boundaries of the nodules. The evaluation metrics and visualization results demonstrate that the segmentation performance of the network is superior to that of other classical deep learning-based networks. CONCLUSIONS The proposed dual-path network can accurately realize automatic segmentation of thyroid nodules on ultrasound images. It can also be used as an initial step in computer-aided diagnosis. It shows superior performance to other classical methods and demonstrates the potential for accurate segmentation of nodules in clinical applications.
Affiliation(s)
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Chunying Li
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Zhengda Lu
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Mu He
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Tong Zhao
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Xiaoqin Li
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Liugang Gao
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Qianyi Xi
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Fan Zhang
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
8
Layek K, Basak B, Samanta S, Maity SP, Barui A. Stiffness prediction on elastography images and neuro-fuzzy based segmentation for thyroid cancer detection. Appl Opt 2022; 61:49-59. [PMID: 35200805 DOI: 10.1364/ao.445226]
Abstract
The elastography method detects metastatic changes by measuring the stiffness of tissues. Estimating elasticities from elastography images facilitates more precise identification and detection of the metastatic region. In this study, an automated segmentation algorithm is proposed that calculates pixel-wise elasticity values to detect thyroid cancer from elastography images. This intensity-to-elasticity conversion is achieved by constructing a fuzzy inference system using an adaptive neuro-fuzzy inference system supported by two meta-heuristic algorithms: a genetic algorithm and particle swarm optimization. Pixels of the input color images (red, green, and blue) are replaced by equivalent elasticity values (in kilopascals) and stored in a two-dimensional array to form an "elasticity matrix." The elasticity matrix is then segmented into three regions, namely suspicious, near-suspicious, and non-suspicious, based on the elasticity measures, where the threshold limits are calculated using the fuzzy entropy maximization method optimized by the differential evolution algorithm. Segmentation performance is evaluated by the kappa and Dice similarity coefficients, with average values of 0.94±0.11 and 0.93±0.12, respectively. Sensitivity and specificity achieved by the proposed method are 86.35±0.34% and 97.67±0.40%, respectively, giving an overall accuracy of 93.50±0.42%. The results justify the importance of pixel stiffness for segmentation of thyroid nodules in elastography images.
9
Fouad M, El Ghany MAA, Huebner M, Schmitz G. A Deep Learning Signal-Based Approach to Fast Harmonic Imaging. 2021 IEEE International Ultrasonics Symposium (IUS) 2021. [DOI: 10.1109/ius52206.2021.9593348]
10
Zhu X, Ying J, Yang H, Fu L, Li B, Jiang B. Detection of deep myometrial invasion in endometrial cancer MR imaging based on multi-feature fusion and probabilistic support vector machine ensemble. Comput Biol Med 2021; 134:104487. [PMID: 34022489 DOI: 10.1016/j.compbiomed.2021.104487]
Abstract
The depth of myometrial invasion affects the treatment and prognosis of patients with endometrial cancer (EC) and is conventionally evaluated using MR imaging (MRI). However, only a few computer-aided diagnosis methods have been reported for identifying deep myometrial invasion (DMI) on MRI, and these existing methods exhibit relatively unsatisfactory sensitivity and specificity. This study proposes a novel computerized method to facilitate the accurate detection of DMI on MRI. The method requires only the corpus uteri region, provided by humans or computers, instead of the tumor region. We also propose a geometric feature called LS to describe the irregularity of the tissue structure inside the corpus uteri triggered by EC, which has not been leveraged in DMI prediction models in other studies. Texture features are extracted and then automatically selected by recursive feature elimination. Using a feature fusion strategy of strong and weak features devised in this study, multiple probabilistic support vector machines incorporate LS and texture features and are then merged to form the ensemble model EPSVM. Model performance is evaluated via leave-one-out cross-validation. We make two comparisons: EPSVM versus commonly used classifiers such as random forest, logistic regression, and naive Bayes; and EPSVM versus models using LS or texture features alone. The results show that EPSVM attains an accuracy, sensitivity, specificity, and F1 score of 93.7%, 94.7%, 93.3%, and 87.8%, all higher than those of the commonly used classifiers and of the models using LS or texture features alone. Compared with the methods in existing studies, EPSVM exhibits high performance in terms of both sensitivity and specificity. Moreover, LS alone can achieve an accuracy, sensitivity, and specificity of 89.9%, 89.5%, and 90.0%, so the devised geometric feature LS is significant for DMI detection. The fusion of LS and texture features in the proposed EPSVM provides more reliable prediction, and computer-aided classification based on the proposed method can assist radiologists in accurately identifying DMI on MRI.
11
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification, and monitoring of diseases. Diagnostic performance is inevitably reduced by the high operator dependence intrinsic to US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we provide a general understanding of AI, machine learning (ML), and deep learning (DL) technologies; we then review the rapidly growing applications of AI, especially DL, in US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, the musculoskeletal system, and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
Collapse
12
Sharifi Y, Bakhshali MA, Dehghani T, Danaiashgzari M, Sargolzaei M, Eslami S. Deep learning on ultrasound images of thyroid nodules. Biocybern Biomed Eng 2021; 41:636-655. [DOI: 10.1016/j.bbe.2021.02.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Indexed: 12/13/2022]
13
Han M, Ha EJ, Park JH. Computer-Aided Diagnostic System for Thyroid Nodules on Ultrasonography: Diagnostic Performance Based on the Thyroid Imaging Reporting and Data System Classification and Dichotomous Outcomes. AJNR Am J Neuroradiol 2020; 42:559-565. [PMID: 33361374 DOI: 10.3174/ajnr.a6922] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Received: 02/14/2020] [Accepted: 09/29/2020] [Indexed: 01/19/2023]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence-based computer-aided diagnostic systems have been introduced for thyroid cancer diagnosis. Our aim was to compare the diagnostic performance of a commercially available computer-aided diagnostic system and radiologist-based assessment for the detection of thyroid cancer, based on the Thyroid Imaging Reporting and Data System (TIRADS) and on dichotomous outcomes. MATERIALS AND METHODS In total, 372 consecutive patients with 454 thyroid nodules were enrolled. The computer-aided diagnostic system was set up to render a possible diagnosis in 2 formats: the Korean Society of Thyroid Radiology (K-TIRADS) and American Thyroid Association (ATA-TIRADS) classifications, and dichotomous outcomes (possibly benign or possibly malignant). RESULTS The diagnostic sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the computer-aided diagnostic system for thyroid cancer were, respectively, 97.6%, 21.6%, 42.0%, 93.9%, and 49.6% for K-TIRADS; 94.6%, 29.6%, 43.9%, 90.4%, and 53.5% for ATA-TIRADS; and 81.4%, 81.9%, 72.3%, 88.3%, and 81.7% for dichotomous outcomes. The sensitivities of the computer-aided diagnostic system did not differ significantly from those of the radiologist (all P > .05); the specificities and accuracies were significantly lower than those of the radiologist (all P < .001). Unnecessary fine-needle aspiration rates were lower for the dichotomous outcome characterizations, particularly for those performed by the radiologist. The interobserver agreement for the K-TIRADS and ATA-TIRADS classifications was fair to moderate, whereas the dichotomous outcomes were in substantial agreement. CONCLUSIONS The diagnostic performance of the computer-aided diagnostic system varies between TIRADS classification and dichotomous outcomes and relative to radiologist-based assessments. Clinicians should be aware of the strengths and weaknesses of computer-aided diagnostic systems in the diagnosis of thyroid cancer.
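The dichotomous-outcome metrics reported in this abstract (sensitivity, specificity, PPV, NPV, and accuracy) all derive from a 2x2 confusion matrix. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic performance metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts, for illustration only.
m = diagnostic_metrics(tp=80, fp=20, fn=10, tn=90)
print({k: round(v, 3) for k, v in m.items()})
```

Note the trade-off visible in the study's results: a classification with near-perfect sensitivity (K-TIRADS, 97.6%) can still have very low specificity and accuracy, because the metrics weight the four confusion-matrix cells differently.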
Affiliation(s)
- M Han
- Department of Radiology, Ajou University School of Medicine, Suwon, Korea
- E J Ha
- Department of Radiology, Ajou University School of Medicine, Suwon, Korea
- J H Park
- Department of Radiology, Ajou University School of Medicine, Suwon, Korea
14
Wang B, Perronne L, Burke C, Adler RS. Artificial Intelligence for Classification of Soft-Tissue Masses at US. Radiol Artif Intell 2020; 3:e200125. [PMID: 33937855 DOI: 10.1148/ryai.2020200125] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 05/27/2020] [Revised: 10/05/2020] [Accepted: 10/28/2020] [Indexed: 12/17/2022]
Abstract
Purpose To train convolutional neural network (CNN) models to classify benign and malignant soft-tissue masses at US and to differentiate three commonly observed benign masses. Materials and Methods In this retrospective study, US images obtained between May 2010 and June 2019 from 419 patients (mean age, 52 years ± 18 [standard deviation]; 250 women) with histologic diagnosis confirmed at biopsy or surgical excision (n = 227) or masses that demonstrated imaging characteristics of lipoma, benign peripheral nerve sheath tumor, and vascular malformation (n = 192) were included. Images in patients with a histologic diagnosis (n = 227) were used to train and evaluate a CNN model to distinguish malignant and benign lesions. Twenty percent of cases were withheld as a test dataset, and the remaining cases were used to train the model with a 75%-25% training-validation split and fourfold cross-validation. Performance of the model was compared with retrospective interpretation of the same dataset by two experienced musculoskeletal radiologists, blinded to clinical history. A second group of US images from 275 of the 419 patients containing the three common benign masses was used to train and evaluate a separate model to differentiate between the masses. The models were trained on the Keras machine learning platform (version 2.3.1), with a modified pretrained VGG16 network. Performance metrics of the model and of the radiologists were compared by using the McNemar test, and 95% CIs for performance metrics were estimated by using the Clopper-Pearson method (accuracy, recall, specificity, and precision) and the DeLong method (area under the receiver operating characteristic curve). Results The model trained to classify malignant and benign masses demonstrated an accuracy of 79% (95% CI: 68, 88) on the test data, with an area under the receiver operating characteristic curve of 0.91 (95% CI: 0.84, 0.98), matching the performance of two expert readers. 
Performance of the model distinguishing the three benign masses was lower, with an accuracy of 71% (95% CI: 61, 80) on the test data. Conclusion The trained CNN was capable of differentiating between benign and malignant soft-tissue masses depicted on US images, with performance matching that of two experienced musculoskeletal radiologists. © RSNA, 2020.
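The exact (Clopper-Pearson) confidence intervals this study reports for accuracy, recall, specificity, and precision can be computed from the beta distribution. A sketch with hypothetical counts (not the study's data), assuming scipy is available:

```python
from scipy.stats import beta


def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi


# e.g. 36 correct classifications out of 45 hypothetical test cases
lo, hi = clopper_pearson(36, 45)
print(f"accuracy = {36 / 45:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The exact interval is preferred over the normal approximation for small test sets like the withheld 20% here, since it guarantees at least nominal coverage even near 0% or 100%.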
Affiliation(s)
- Benjamin Wang
- Department of Radiology, Division of Musculoskeletal Radiology, NYU Langone Health, 301 E 17th St, 6th Floor, New York, NY, 10003 (B.W., C.B., R.S.A.); and Department of Musculoskeletal Imaging, Hôpital Lariboisière, Paris, France (L.P.)
- Laetitia Perronne
- Department of Radiology, Division of Musculoskeletal Radiology, NYU Langone Health, 301 E 17th St, 6th Floor, New York, NY, 10003 (B.W., C.B., R.S.A.); and Department of Musculoskeletal Imaging, Hôpital Lariboisière, Paris, France (L.P.)
- Christopher Burke
- Department of Radiology, Division of Musculoskeletal Radiology, NYU Langone Health, 301 E 17th St, 6th Floor, New York, NY, 10003 (B.W., C.B., R.S.A.); and Department of Musculoskeletal Imaging, Hôpital Lariboisière, Paris, France (L.P.)
- Ronald S Adler
- Department of Radiology, Division of Musculoskeletal Radiology, NYU Langone Health, 301 E 17th St, 6th Floor, New York, NY, 10003 (B.W., C.B., R.S.A.); and Department of Musculoskeletal Imaging, Hôpital Lariboisière, Paris, France (L.P.)
15
Park VY, Lee E, Lee HS, Kim HJ, Yoon J, Son J, Song K, Moon HJ, Yoon JH, Kim GR, Kwak JY. Combining radiomics with ultrasound-based risk stratification systems for thyroid nodules: an approach for improving performance. Eur Radiol 2020; 31:2405-2413. [PMID: 33034748 DOI: 10.1007/s00330-020-07365-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Received: 06/09/2020] [Revised: 08/30/2020] [Accepted: 10/01/2020] [Indexed: 10/23/2022]
Abstract
OBJECTIVES To develop a radiomics score using ultrasound images to predict thyroid malignancy and to investigate its potential as a complementary tool to improve the performance of risk stratification systems. METHODS We retrospectively included consecutive patients who underwent fine-needle aspiration (FNA) for thyroid nodules that were cytopathologically diagnosed as benign or malignant. Nodules were randomly assigned to a training and test set (8:2 ratio). A radiomics score was developed from the training set, and cutoff values based on the maximum Youden index (Rad_maxY) and on 5%, 10%, and 20% predicted malignancy risk (Rad_5%, Rad_10%, and Rad_20%, respectively) were applied to the test set. The performances of the American College of Radiology (ACR) and American Thyroid Association (ATA) guidelines were compared with the combined performances of the guidelines and the radiomics score, with interpretations from expert and nonexpert readers. RESULTS A total of 1624 thyroid nodules from 1609 patients (mean age, 50.1 years [range, 18-90 years]) were included. The radiomics score yielded an AUC of 0.85 (95% CI: 0.83, 0.87) in the training set and 0.75 (95% CI: 0.69, 0.81) in the test set (Rad_maxY). When the radiomics score was combined with the ACR or ATA guidelines (Rad_5%), all readers showed increased specificity, accuracy, and PPV and decreased unnecessary FNA rates (all p < .05), with no difference in sensitivity (p > .05). CONCLUSION Radiomics can help predict thyroid malignancy and improve specificity, accuracy, and PPV while reducing the unnecessary FNA rate, maintaining the sensitivity of the ACR and ATA guidelines for both expert and nonexpert readers. KEY POINTS • The radiomics score yielded an AUC of 0.85 and 0.75 in the training and test sets, respectively. • For all readers, combining a 5% predicted malignancy risk cutoff for the radiomics score with the ACR and ATA guidelines significantly increased specificity, accuracy, and PPV and decreased unnecessary FNA rates, with no decrease in sensitivity. • Radiomics can help predict malignancy in thyroid nodules in combination with risk stratification systems by improving specificity, accuracy, and PPV and reducing unnecessary FNA rates while maintaining sensitivity for both expert and nonexpert readers.
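The maximum-Youden-index cutoff (Rad_maxY) mentioned in this abstract selects the score threshold that maximizes J = sensitivity + specificity − 1. A minimal sketch on hypothetical radiomics scores and labels (not the study's data or method implementation):

```python
def max_youden_cutoff(scores, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 = malignant, 0 = benign; a case is called positive when
    its score is >= the candidate threshold.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j


# Hypothetical radiomics scores (higher = more suspicious of malignancy)
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(max_youden_cutoff(scores, labels))
```

By contrast, the Rad_5%/Rad_10%/Rad_20% cutoffs fix the threshold at a chosen predicted malignancy risk rather than at the point of best sensitivity-specificity balance, which is why Rad_5% trades specificity for preserved sensitivity.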
Affiliation(s)
- Vivian Y Park
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Eunjung Lee
- Department of Computational Science and Engineering, Yonsei University, Seoul, Korea
- Hye Sun Lee
- Biostatistics Collaboration Unit, Yonsei University College of Medicine, Seoul, Korea
- Hye Jung Kim
- Department of Radiology, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, Daegu, Korea
- Jiyoung Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Jinwoo Son
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Kijun Song
- Department of Biostatistics, Yonsei University College of Nursing, Seoul, Korea
- Hee Jung Moon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Jung Hyun Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Ga Ram Kim
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
- Jin Young Kwak
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, 03722, Seoul, Korea
16
Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. [PMID: 33152846 PMCID: PMC7758107 DOI: 10.14366/usg.20102] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 07/03/2020] [Accepted: 09/14/2020] [Indexed: 12/12/2022] Open
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging tasks of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts toward workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Affiliation(s)
- Jonghyon Yi
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Ho Kyung Kang
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon
- DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Kang-Sik Kim
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Moon Ho Park
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Yeong Kyeong Seong
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Byungeun Ahn
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Kilsu Ha
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Zaegyoo Hah
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang
- Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea; Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea