51. Awasthi N, Vermeer L, Fixsen LS, Lopata RGP, Pluim JPW. LVNet: Lightweight Model for Left Ventricle Segmentation for Short Axis Views in Echocardiographic Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:2115-2128. [PMID: 35452387] [DOI: 10.1109/tuffc.2022.3169684]
Abstract
Lightweight segmentation models are becoming more popular for fast diagnosis on small, low-cost medical imaging devices. This study focuses on segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model, LV network (LVNet), is proposed for segmentation; it requires fewer parameters while improving segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods such as UNet, MiniNetV2, and the fully convolutional dense dilated network (FCdDN), and comes with a post-processing pipeline that further enhances the segmentation results. In general, training is done directly using the segmentation mask as the output and the US image as the input of the model; a new training strategy for segmentation is also introduced in addition to this direct method. Compared with the UNet model, an improvement in DS as high as 5% was found for segmentation with papillary (WP) muscles included, and of 18.5% when the papillary muscles are excluded, while the proposed model requires only 5% of the memory required by a UNet model. LVNet thus achieves a better trade-off between parameter count and segmentation performance than other conventional models. The developed codes are available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.
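Since the comparisons above hinge on the Dice score (DS), a minimal sketch of how it is computed between a predicted and a ground-truth binary mask may be useful (hypothetical toy masks, not the authors' code):

```python
def dice_score(pred, truth):
    """Dice score DS = 2*|A∩B| / (|A|+|B|) for two binary masks
    given as nested lists of 0/1; 1.0 means perfect overlap."""
    inter = sum(p & t for rp, rt in zip(pred, truth)
                for p, t in zip(rp, rt))
    total = sum(map(sum, pred)) + sum(map(sum, truth))
    return 1.0 if total == 0 else 2.0 * inter / total

pred  = [[0, 1, 1],
         [0, 1, 0]]
truth = [[0, 1, 0],
         [0, 1, 0]]
print(dice_score(pred, truth))  # 2*2/(3+2) = 0.8
```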
52. Biradar S, Akkasaligar PT. Feature Extraction and Classification of Digital Kidney Ultrasound Images: A Hybrid Approach. Pattern Recognition and Image Analysis 2022. [DOI: 10.1134/s1054661822020043]
53. Huang R, Lin M, Dou H, Lin Z, Ying Q, Jia X, Xu W, Mei Z, Yang X, Dong Y, Zhou J, Ni D. Boundary-rendering Network for Breast Lesion Segmentation in Ultrasound Images. Med Image Anal 2022; 80:102478. [DOI: 10.1016/j.media.2022.102478]
54. Chen G, Yin J, Dai Y, Zhang J, Yin X, Cui L. A novel convolutional neural network for kidney ultrasound images segmentation. Comput Methods Programs Biomed 2022; 218:106712. [PMID: 35248816] [DOI: 10.1016/j.cmpb.2022.106712]
Abstract
BACKGROUND AND OBJECTIVE Ultrasound imaging has been widely used in the screening of kidney diseases, and localization and segmentation of the kidneys in ultrasound images aid the clinical diagnosis of disease. However, accurately segmenting the kidney from ultrasound images is challenging due to the interference of various factors. METHODS In this paper, a novel multi-scale, deep-supervised CNN architecture is proposed to segment the kidney. The architecture consists of an encoder, a pyramid pooling module and a decoder. In the encoder, we design a multi-scale input pyramid with parallel branches to capture features at different scales. In the decoder, a multi-output supervision module is developed, which enables the network to learn to predict more precise segmentation results scale-by-scale. In addition, we construct a kidney ultrasound dataset containing 400 images and 400 labels. RESULTS To highlight the effectiveness of the proposed approach, we use six quantitative indicators to compare with several state-of-the-art methods on the same kidney ultrasound dataset. Our method achieves accuracy, Dice, Jaccard, precision and recall of 98.86%, 95.86%, 92.18%, 96.38% and 95.47%, respectively, and an average symmetric surface distance (ASSD) of 0.3510. CONCLUSIONS The analysis of the evaluation indicators and segmentation results shows that our method achieves the best performance in kidney ultrasound image segmentation.
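Of the six indicators, ASSD is the only one not derived from pixel-wise overlap counts; a small sketch of its usual definition (mean nearest-neighbour distance between the two mask boundaries, symmetrized) on toy masks, not the paper's implementation:

```python
import math

def boundary(mask):
    """Foreground pixels with at least one 4-neighbour outside the mask."""
    h, w = len(mask), len(mask[0])
    return [(i, j) for i in range(h) for j in range(w)
            if mask[i][j] and any(
                not (0 <= ni < h and 0 <= nj < w and mask[ni][nj])
                for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)))]

def assd(a, b):
    """Average symmetric surface distance between two binary masks."""
    pa, pb = boundary(a), boundary(b)
    d = [min(math.dist(p, q) for q in pb) for p in pa] + \
        [min(math.dist(q, p) for p in pa) for q in pb]
    return sum(d) / len(d)

a = [[1, 1, 0],   # a 2x2 square ...
     [1, 1, 0],
     [0, 0, 0]]
b = [[0, 1, 1],   # ... and the same square shifted one column right
     [0, 1, 1],
     [0, 0, 0]]
print(assd(a, b))  # 0.5
```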
Affiliation(s)
- Gongping Chen
- The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Jingjing Yin
- The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Yu Dai
- The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Jianxun Zhang
- The Institute of Robotics and Automatic Information System, Tianjin Key Laboratory of Intelligent Robotics, College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Xiaotao Yin
- Department of Urology, Fourth Medical Center of Chinese PLA General Hospital, Beijing 100048, China
- Liang Cui
- Department of Urology, Civil Aviation General Hospital, Beijing 100123, China
55. Inan MSK, Alam FI, Hasan R. Deep integrated pipeline of segmentation guided classification of breast cancer from ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103553]
56. Song Y, Zheng J, Lei L, Ni Z, Zhao B, Hu Y. CT2US: Cross-modal transfer learning for kidney segmentation in ultrasound images with synthesized data. Ultrasonics 2022; 122:106706. [PMID: 35149255] [DOI: 10.1016/j.ultras.2022.106706]
Abstract
Accurate segmentation of the kidney in ultrasound images is a vital procedure in clinical diagnosis and interventional operations. In recent years, deep learning technology has demonstrated promising prospects in medical image analysis. However, due to the inherent problems of ultrasound images, annotated data are scarce and arduous to acquire, hampering the application of data-hungry deep learning methods. In this paper, we propose cross-modal transfer learning from computed tomography (CT) to ultrasound (US) by leveraging annotated data in the CT modality. In particular, we adopt a cycle generative adversarial network (CycleGAN) to synthesize US images from CT data and construct a transition dataset to mitigate the immense domain discrepancy between US and CT. Mainstream convolutional neural networks such as U-Net, U-Res, PSPNet, and DeepLab v3+ are pretrained on the transition dataset and then transferred to real US images. We first trained CNN models on a dataset of 50 ultrasound images and validated them on a validation set of 30 ultrasound images. In addition, we selected 82 ultrasound images from another hospital to construct a cross-site dataset to verify the generalization performance of the models. The experimental results show that with our proposed transfer learning strategy, the segmentation accuracy in Dice similarity coefficient (DSC) reaches 0.853 for U-Net, 0.850 for U-Res, 0.826 for PSPNet and 0.827 for DeepLab v3+ on the cross-site test set. Compared with training from scratch, the accuracy improvements were 0.127, 0.097, 0.105 and 0.036, respectively. Our transfer learning strategy effectively improves the accuracy and generalization ability of ultrasound image segmentation models with limited training data.
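A quick arithmetic check: the from-scratch baselines implied by the abstract follow directly from the reported cross-site DSCs and improvements (the numbers are copied from the abstract; the subtraction is ours):

```python
# (cross-site DSC with transfer learning, reported improvement over from-scratch)
results = {
    "U-Net":       (0.853, 0.127),
    "U-Res":       (0.850, 0.097),
    "PSPNet":      (0.826, 0.105),
    "DeepLab v3+": (0.827, 0.036),
}
for model, (transfer, gain) in results.items():
    print(f"{model}: implied from-scratch DSC = {transfer - gain:.3f}")
```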
Affiliation(s)
- Yuxin Song
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China and University of Chinese Academy of Sciences, Beijing 100039, China
- Jing Zheng
- Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen 518020, China
- Long Lei
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhipeng Ni
- Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen 518020, China
- Baoliang Zhao
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Pazhou Lab, Guangzhou 510320, China
- Ying Hu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Pazhou Lab, Guangzhou 510320, China
57. Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022; 32:7163-7172. [PMID: 35488916] [DOI: 10.1007/s00330-022-08836-x]
Abstract
OBJECTIVE To develop a novel deep learning network (DLN) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine its performance for diagnosing breast cancer in automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set and a test set (600 vs. 169). The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) added a new ASN to the traditional ResNet networks and extracted morphological information of breast tumors. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated, and the diagnostic performance of the novel DLNs was compared with that of two radiologists with different levels of experience. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85 and AP 0.90) compared with the remaining five DLNs. The novel DLNs performed better than the novice radiologist, raising the F1 score from 0.77 to 0.78, 0.81, and 0.82; however, their diagnostic performance was worse than that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS
• A novel automatic segmentation network to extract morphological information was successfully developed and implemented with ResNet deep learning networks.
• The novel deep learning networks in our research performed better than the traditional deep learning networks in the diagnosis of breast cancer using ABUS images.
• The novel deep learning networks in our research may be useful for novice radiologists to improve diagnostic performance.
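For reference, the diagnostic metrics reported above all derive from the confusion matrix; a small self-contained sketch with made-up counts (illustration only, not study data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),   # positive predictive value
        "NPV":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a 169-case test set (not the paper's data).
m = diagnostic_metrics(tp=85, fp=15, tn=45, fn=24)
print({k: round(v, 3) for k, v in m.items()})
```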
Affiliation(s)
- Qiucheng Wang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- He Chen
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Bo Li
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Haitao Shang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Hua Shao
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Zhongshuai Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Wen Cheng
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
58. Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10179-4]
59. A Multiscale Approach for Predicting Certain Effects of Hand-Transmitted Vibration on Finger Arteries. Vibration 2022. [DOI: 10.3390/vibration5020014]
Abstract
Prolonged exposure to strong hand-arm vibrations can lead to vascular disorders such as Vibration White Finger (VWF). We modeled the onset of this peripheral vascular disease in two steps. The first consists of assessing the reduction in the shearing forces exerted by the blood on the walls of the arteries (Wall Shear Stress, WSS) during exposure to vibrations; an acute but repeated reduction in WSS can lead to the arterial stenosis characteristic of VWF. The second step uses a numerical mechano-biological model to predict this stenosis as a function of WSS. WSS is reduced by a factor of 3 during exposure to vibration of 40 m·s⁻². This reduction is independent of the excitation frequency between 31 Hz and 400 Hz, and WSS decreases logarithmically as the amplitude of the vibration increases. The mechano-biological model simulated an arterial stenosis of 30% for an employee exposed for 4 h a day for 10 years, and also highlighted the chronic accumulation of matrix metalloproteinase 2. By considering daily exposure and the vibratory level, we can calculate the degree of stenosis, and thus of the disease, for chronic exposure to vibrations.
60. The Accuracy and Radiomics Feature Effects of Multiple U-net-Based Automatic Segmentation Models for Transvaginal Ultrasound Images of Cervical Cancer. J Digit Imaging 2022; 35:983-992. [PMID: 35355160] [PMCID: PMC9485324] [DOI: 10.1007/s10278-022-00620-z]
Abstract
Ultrasound (US) imaging is recognized and widely used as a screening and diagnostic modality for cervical cancer all over the world. However, few studies have investigated U-net-based automatic segmentation models for cervical cancer on US images or the effects of automatic segmentation on radiomics features. A total of 1102 transvaginal US images from 796 cervical cancer patients were collected and randomly divided into training (n = 800), validation (n = 100) and test (n = 202) sets in this study. Four U-net models (U-net, U-net with ResNet, context encoder network (CE-net), and Attention U-net) were adapted to automatically segment the cervical cancer target on these US images. Radiomics features were extracted and evaluated from both the manually and the automatically segmented areas. The mean Dice similarity coefficients (DSC) of U-net, Attention U-net, CE-net, and U-net with ResNet were 0.88, 0.89, 0.88, and 0.90, respectively. The average Pearson coefficients for the reliability of US image-based radiomics, in comparison with manual segmentation, were 0.94, 0.96, 0.94, and 0.95 for U-net, U-net with ResNet, Attention U-net, and CE-net, respectively. The reproducibility of the radiomics parameters, evaluated by intraclass correlation coefficients (ICC), showed the robustness of automatic segmentation, with an average ICC of 0.99. In conclusion, U-net-based automatic segmentation achieved high accuracy in delineating the target area in cervical cancer US images, and further radiomics studies with features extracted from automatically segmented target areas are feasible and reliable.
61. Coarse label refinement for improving prostate cancer detection in ultrasound imaging. Int J Comput Assist Radiol Surg 2022; 17:841-847. [PMID: 35344123] [DOI: 10.1007/s11548-022-02606-2]
Abstract
PURPOSE Ultrasound-guided biopsy plays a major role in prostate cancer (PCa) detection, yet is limited by a high rate of false negatives and the low diagnostic yield of current systematic, non-targeted approaches. Machine learning models that accurately identify cancerous tissue in ultrasound would help sample tissue from regions with higher cancer likelihood. A plausible approach is to use the individual ultrasound signals corresponding to a core as inputs and the histopathology diagnosis for the entire core as the label. However, this introduces a significant amount of label noise into training and degrades classification performance. Previously, we suggested that histopathology-reported cancer involvement can be a reasonable approximation of the label noise. METHODS Here, we propose an involvement-based label refinement (iLR) method to correct corrupted labels and improve cancer classification. The difference between predicted and true cancer involvement is used to guide the label refinement process. We further incorporate iLR into state-of-the-art methods for learning with noisy labels and predicting cancer involvement. RESULTS We use 258 biopsy cores from 70 patients and demonstrate that our proposed label refinement method improves the performance of multiple noise-tolerant approaches, achieving a balanced accuracy, correlation coefficient, and mean absolute error of 76.7%, 0.68, and 12.4, respectively. CONCLUSIONS Our key contribution is to leverage a data-centric method for dealing with noisy labels using histopathology reports, improving prostate cancer diagnosis through a hierarchical training process with label refinement.
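The core idea, using histopathology-reported involvement to correct core-level labels, can be sketched as follows. This is a toy illustration under simplified assumptions, not the authors' pipeline: every signal in a positive core initially inherits the "cancer" label, and the refinement keeps that label only for the fraction of signals the model ranks as most cancer-like, matching the reported involvement.

```python
def refine_core_labels(pred_probs, involvement):
    """Involvement-guided label refinement for one biopsy core (toy sketch).

    pred_probs: the model's cancer probability for each ultrasound signal
    in the core. involvement: histopathology-reported fraction of
    cancerous tissue. The 'cancer' label is kept only for the top-k
    ranked signals, where k matches the reported involvement.
    """
    k = round(involvement * len(pred_probs))
    ranked = sorted(range(len(pred_probs)),
                    key=lambda i: pred_probs[i], reverse=True)
    keep = set(ranked[:k])
    return [1 if i in keep else 0 for i in range(len(pred_probs))]

# A core reported as 40% involved: only the 2 most confident of its
# 5 signals keep the positive label.
print(refine_core_labels([0.9, 0.2, 0.7, 0.1, 0.4], 0.4))  # [1, 0, 1, 0, 0]
```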
62. Autonomous Prostate Segmentation in 2D B-Mode Ultrasound Images. Applied Sciences 2022. [DOI: 10.3390/app12062994]
Abstract
Prostate brachytherapy is a treatment for prostate cancer; during the planning of the procedure, ultrasound images of the prostate are taken. The prostate must be segmented out in each of the ultrasound images, and to assist with the procedure, an autonomous prostate segmentation algorithm is proposed. The prostate contouring system presented here is based on a novel superpixel algorithm, whereby pixels in the ultrasound image are grouped into superpixel regions that are optimized based on statistical similarity measures, so that the various structures within the ultrasound image can be differentiated. An active shape prostate contour model is developed and then used to delineate the prostate within the image based on the superpixel regions. Before segmentation, this contour model was fit to a series of point-based clinician-segmented prostate contours exported from conventional prostate brachytherapy planning software to develop a statistical model of the shape of the prostate. The algorithm was evaluated on nine sets of in vivo prostate ultrasound images and compared with manually segmented contours from a clinician, where the algorithm had an average volume difference of 4.49 mL or 10.89%.
63. Du G, Zhan Y, Zhang Y, Guo J, Chen X, Liang J, Zhao H. Automated segmentation of the gastrocnemius and soleus in shank ultrasound images through deep residual neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103447]
64. Punn NS, Agarwal S. Modality specific U-Net variants for biomedical image segmentation: a survey. Artif Intell Rev 2022; 55:5845-5889. [PMID: 35250146] [PMCID: PMC8886195] [DOI: 10.1007/s10462-022-10152-1]
Abstract
With the advent of advancements in deep learning approaches such as deep convolutional neural networks, residual neural networks, and adversarial networks, U-Net architectures are the most widely utilized in biomedical image segmentation to automate the identification and detection of target regions or sub-regions. In recent studies, U-Net based approaches have illustrated state-of-the-art performance in different applications for the development of computer-aided diagnosis systems for the early diagnosis and treatment of diseases such as brain tumors, lung cancer, Alzheimer's disease, breast cancer, etc., using various modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of U-Net variants through (1) inter-modality and (2) intra-modality categorization, to establish better insights into the associated challenges and solutions. Besides, this article also highlights the contribution of U-Net based frameworks in the ongoing pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19. Finally, the strengths and similarities of these U-Net variants are analysed, along with the challenges involved in biomedical image segmentation, to uncover promising future research directions in this area.
65. Wu H, Liu J, Xiao F, Wen Z, Cheng L, Qin J. Semi-supervised Segmentation of Echocardiography Videos via Noise-resilient Spatiotemporal Semantic Calibration and Fusion. Med Image Anal 2022; 78:102397. [DOI: 10.1016/j.media.2022.102397]
66. Ning Z, Zhong S, Feng Q, Chen W, Zhang Y. SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image. IEEE Trans Med Imaging 2022; 41:476-490. [PMID: 34582349] [DOI: 10.1109/tmi.2021.3116087]
Abstract
Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and the intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) make lesion segmentation challenging, and although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations for assisting foreground segmentation. Other characteristics of BUS images, namely 1) low-contrast appearance and blurry boundaries, and 2) significant variation in lesion shape and position, also increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images, composed of a main network with an additional middle stream and an auxiliary network. Specifically, we first propose the generation of foreground and background saliency maps that incorporate both low-level and high-level image structures. These saliency maps are then employed to guide the main network and auxiliary network in learning foreground-salient and background-salient representations, respectively. Furthermore, we devise an additional middle stream consisting of background-assisted fusion, shape-aware, edge-aware and position-aware units. This stream receives coarse-to-fine representations from the main and auxiliary networks, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale than several state-of-the-art deep learning approaches to breast lesion segmentation in ultrasound images.
67. Song R, Zhu C, Zhang L, Zhang T, Luo Y, Liu J, Yang J. Dual-branch network via pseudo-label training for thyroid nodule detection in ultrasound image. Appl Intell 2022. [DOI: 10.1007/s10489-021-02967-2]
68. Exploring the Age Effects on European Portuguese Vowel Production: An Ultrasound Study. Applied Sciences 2022. [DOI: 10.3390/app12031396]
Abstract
For aging speech, there is limited knowledge of the articulatory adjustments underlying the acoustic findings observed in previous studies. In order to investigate age-related articulatory differences in European Portuguese (EP) vowels, the present study analyzes the tongue configuration of the nine EP oral vowels (in isolation and in pseudowords) produced by 10 female speakers of two different age groups (young and old). Two parameters, tongue height and tongue advancement, were extracted from tongue contours that were automatically segmented from the US images and manually revised. The results suggest that, for almost all vowels, the tongue tends to be higher and more advanced in the older females than in the younger ones, so the articulatory vowel space tends to become higher, more advanced, and bigger with age. Unlike the younger females, who presented a sharp reduction of the articulatory vowel space in disyllabic sequences, the older females show a vowel space that tends to be more advanced for isolated vowels than for vowels produced in disyllabic sequences. This study extends our pilot research by reporting articulatory data from more speakers, based on an improved automatic method of tracing tongue contours, and performs an inter-speaker comparison through the application of a novel normalization procedure.
69. Deeply-Supervised 3D Convolutional Neural Networks for Automated Ovary and Follicle Detection from Ultrasound Volumes. Applied Sciences 2022. [DOI: 10.3390/app12031246]
Abstract
Automated detection of ovarian follicles in ultrasound images is much appreciated when its effectiveness is comparable with the experts’ annotations. Today’s best methods estimate follicles notably worse than the experts. This paper describes the development of two-stage deeply-supervised 3D Convolutional Neural Networks (CNN) based on the established U-Net. Either the entire U-Net or specific parts of the U-Net decoder were replicated in order to integrate the prior knowledge into the detection. Methods were trained end-to-end by follicle detection, while transfer learning was employed for ovary detection. The USOVA3D database of annotated ultrasound volumes, with its verification protocol, was used to verify the effectiveness. In follicle detection, the proposed methods estimate follicles up to 2.9% more accurately than the compared methods. With our two-stage CNNs trained by transfer learning, the effectiveness of ovary detection surpasses the up-to-date automated detection methods by about 7.6%. The obtained results demonstrated that our methods estimate follicles only slightly worse than the experts, while the ovaries are detected almost as accurately as by the experts. Statistical analysis of 50 repetitions of CNN model training proved that the training is stable, and that the effectiveness improvements are not only due to random initialisation. Our deeply-supervised 3D CNNs can be adapted easily to other problem domains.
70.
71. Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. [Translated article] Artificial intelligence in dermatology: A threat or an opportunity? Actas Dermo-Sifiliograficas 2022. [DOI: 10.1016/j.ad.2021.07.014]
72. Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. Inteligencia artificial en dermatología: ¿amenaza u oportunidad? [Artificial intelligence in dermatology: A threat or an opportunity?] Actas Dermo-Sifiliograficas 2022; 113:30-46. [DOI: 10.1016/j.ad.2021.07.003]
73. Teng Y, Ai Y, Liang T, Yu B, Jin J, Xie C, Jin X. The Effects of Automatic Segmentations on Preoperative Lymph Node Status Prediction Models With Ultrasound Radiomics for Patients With Early Stage Cervical Cancer. Technol Cancer Res Treat 2022; 21:15330338221099396. [PMID: 35522305] [PMCID: PMC9082739] [DOI: 10.1177/15330338221099396]
Abstract
Introduction: The purpose of this study is to investigate the effects of automatic segmentation algorithms on the performance of ultrasound (US) radiomics models in preoperatively predicting lymph node metastasis (LNM) status for patients with early stage cervical cancer. Methods: US images of 148 cervical cancer patients were collected and manually contoured by two senior radiologists. Four deep learning-based automatic segmentation models, namely U-net, context encoder network (CE-net), Resnet, and attention U-net, were constructed to segment the tumor volumes automatically. Radiomics features were extracted and selected from the manually and automatically segmented regions of interest (ROIs) to predict the LNM of these cervical cancer patients preoperatively. The reliability and reproducibility of the radiomics features and the performances of the prediction models were evaluated. Results: A total of 449 radiomics features were extracted from the manually and automatically segmented ROIs with Pyradiomics. In all, 257 features (57.2%) had an intraclass correlation coefficient (ICC) > 0.9 between the manually and automatically segmented contours. The areas under the curve (AUCs) of the validation models with radiomics features extracted from manual, attention U-net, CE-net, Resnet, and U-net segmentations were 0.692, 0.755, 0.696, 0.689, and 0.710, respectively. Attention U-net showed the best performance in the LNM prediction model, with the lowest discrepancy between training and validation. The AUCs of the models with automatic segmentation features from attention U-net, CE-net, Resnet, and U-net were 9.11%, 0.58%, -0.44%, and 2.61% higher, respectively, than the AUC of the model with manually contoured features. Conclusion: Both the reliability and reproducibility of radiomics features and the performance of the radiomics models were affected by the choice between manual and automatic segmentation.
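The ICC > 0.9 feature filter described in the Results can be sketched as follows. This is an illustrative sketch, not the authors' Pyradiomics pipeline; `icc_2_1` implements the common two-way random, single-measures agreement ICC, and the feature names are hypothetical:

```python
def icc_2_1(ratings):
    """Two-way random, single-measures agreement ICC(2,1).
    `ratings` is a list of per-subject rows, one value per rater."""
    n = len(ratings)          # subjects (patients)
    k = len(ratings[0])       # raters (e.g. manual vs automatic contour)
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def reliable_features(manual, auto, threshold=0.9):
    """Keep feature names whose ICC between manual and automatic
    ROI values exceeds the threshold (0.9 in the study)."""
    kept = []
    for name in manual:
        pairs = list(zip(manual[name], auto[name]))
        if icc_2_1(pairs) > threshold:
            kept.append(name)
    return kept
```

A feature reproduced exactly by the automatic contour yields ICC 1.0 and is kept; one that scrambles across patients falls below the threshold and is dropped.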
Affiliation(s)
- Yinyan Teng, Department of Ultrasound Imaging, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China
- Yao Ai, Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China
- Tao Liang, Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China
- Bing Yu, Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China
- Juebin Jin, Department of Medical Engineering, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China
- Congying Xie, Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China; Department of Radiation and Medical Oncology, Wenzhou Medical University Second Affiliated Hospital, Wenzhou, People’s Republic of China
- Xiance Jin, Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People’s Republic of China; School of Basic Medical Science, Wenzhou Medical University, Wenzhou, People’s Republic of China
|
74
|
Noël C, Settembre N. Assessing mechanical vibration-altered wall shear stress in digital arteries. J Biomech 2021; 131:110893. [PMID: 34953283 DOI: 10.1016/j.jbiomech.2021.110893] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 11/26/2021] [Accepted: 11/28/2021] [Indexed: 02/08/2023]
Abstract
The aim of this study is to implement and validate a method for assessing acute vibration-altered Wall Shear Stress (WSS) in the proper volar digital artery of the non-exposed left forefinger while the right hand is subjected to mechanical vibration. Such changes in WSS may be involved in Vibration White Finger. An experimental device was therefore set up linking a vibration shaker and an ultra-high-frequency ultrasound scanner. The Womersley-based WSS was computed by picking the maximum velocity from pulsed-wave Doppler measurements and extracting the artery diameter from B-mode images through an in-house image processing technique. The parameters of the former method were optimised on numerical ultrasound phantoms of cylindrical and lifelike arteries, computed with the FIELD II and FOCUS platforms to mimic our actual ultrasound device. The Womersley-based WSS was compared with full Fluid Structure Interaction (FSI) and rigid-wall models built from magnetic resonance images of a volunteer-specific forefinger artery; our FSI model took the artery's surrounding tissues into account. The diameter computation procedure led to a bias of 4%. The Womersley-based WSS misestimated the FSI model by roughly 10% to 20%. No difference was found between the rigid-wall computational model and the FSI simulations. Regarding the WSS measured in a group of 20 volunteers, the group-averaged basal value was 3 Pa, while the vibration-altered WSS was reduced to 1 Pa, possibly triggering intimal hyperplasia mechanisms and leading to the arterial stenoses encountered in patients suffering from vibration-induced Raynaud's syndrome.
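For intuition about the magnitudes reported (a basal WSS near 3 Pa), a simplified steady-flow estimate can be written down. This is the Poiseuille limit (Womersley number approaching zero), not the paper's full Womersley-based computation, and the numbers in the usage note are illustrative assumptions:

```python
def poiseuille_wss(mu_pa_s, v_max_m_s, diameter_m):
    """Steady Poiseuille wall shear stress from the centreline velocity:
    for a parabolic profile the wall shear rate is 4 * v_max / D, so
    tau_w = 4 * mu * v_max / D. The paper's Womersley-based estimate
    adds the pulsatile correction on top of this steady limit."""
    return 4.0 * mu_pa_s * v_max_m_s / diameter_m
```

With an assumed blood viscosity of 3.5 mPa·s, a 0.2 m/s peak velocity and a 1 mm digital artery, this gives 2.8 Pa, of the same order as the ~3 Pa basal value reported.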
Affiliation(s)
- Christophe Noël, Electromagnetism, Vibration, Optics Laboratory, Institut national de recherche et de sécurité (INRS), Vandœuvre-lès-Nancy, France
- Nicla Settembre, Department of Vascular Surgery, Nancy University Hospital, University of Lorraine, France
|
75
|
Upton R, Mumith A, Beqiri A, Parker A, Hawkes W, Gao S, Porumb M, Sarwar R, Marques P, Markham D, Kenworthy J, O'Driscoll JM, Hassanali N, Groves K, Dockerill C, Woodward W, Alsharqi M, McCourt A, Wilkes EH, Heitner SB, Yadava M, Stojanovski D, Lamata P, Woodward G, Leeson P. Automated Echocardiographic Detection of Severe Coronary Artery Disease Using Artificial Intelligence. JACC Cardiovasc Imaging 2021; 15:715-727. [PMID: 34922865 DOI: 10.1016/j.jcmg.2021.10.013] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 10/01/2021] [Accepted: 10/21/2021] [Indexed: 01/27/2023]
Abstract
OBJECTIVES The purpose of this study was to establish whether an artificial intelligence (AI) system can be developed to automate stress echocardiography analysis and support clinician interpretation. BACKGROUND Coronary artery disease is the leading global cause of mortality and morbidity, and stress echocardiography remains one of the most commonly used diagnostic imaging tests. METHODS An automated image processing pipeline was developed to extract novel geometric and kinematic features from stress echocardiograms collected as part of a large, United Kingdom-based prospective, multicenter, multivendor study. An ensemble machine learning classifier was trained, using the extracted features, to identify patients with severe coronary artery disease on invasive coronary angiography. The model was tested in an independent US study. How availability of an AI classification might impact clinical interpretation of stress echocardiograms was evaluated in a randomized crossover reader study. RESULTS Acceptable classification accuracy for identification of patients with severe coronary artery disease in the training data set was achieved on cross-fold validation based on 31 unique geometric and kinematic features, with a specificity of 92.7% and a sensitivity of 84.4%. This accuracy was maintained in the independent validation data set. The use of the AI classification tool by clinicians increased inter-reader agreement and confidence as well as sensitivity for detection of disease by 10%, to achieve an area under the receiver-operating characteristic curve of 0.93. CONCLUSION Automated analysis of stress echocardiograms is possible using AI, and provision of automated classifications to clinicians when reading stress echocardiograms could improve accuracy, inter-reader agreement, and reader confidence.
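The reported operating point (sensitivity 84.4%, specificity 92.7%) is a direct function of confusion-matrix counts; a minimal sketch, with hypothetical counts chosen only for illustration:

```python
def operating_point(tp, fn, tn, fp):
    """Sensitivity and specificity of a binary classifier from
    confusion-matrix counts (tp/fn among diseased, tn/fp among healthy)."""
    sensitivity = tp / (tp + fn)   # fraction of diseased patients flagged
    specificity = tn / (tn + fp)   # fraction of healthy patients cleared
    return sensitivity, specificity
```

For example, 84 true positives out of 100 diseased and 93 true negatives out of 100 healthy gives (0.84, 0.93).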
Affiliation(s)
- Ross Upton, Ultromics Ltd, Oxford, United Kingdom; Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- Shan Gao, Ultromics Ltd, Oxford, United Kingdom
- Rizwan Sarwar, Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- Jamie M O'Driscoll, Ultromics Ltd, Oxford, United Kingdom; School of Human and Life Sciences, Canterbury Christ Church University, Kent, United Kingdom
- Cameron Dockerill, Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- William Woodward, Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- Maryam Alsharqi, Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- Annabelle McCourt, Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
- Stephen B Heitner, Knight Cardiovascular Institute, Oregon Health & Science University, Portland, Oregon, USA
- Mrinal Yadava, Knight Cardiovascular Institute, Oregon Health & Science University, Portland, Oregon, USA
- David Stojanovski, Department of Imaging Sciences and Biomedical Engineering, King's College London, London, United Kingdom
- Pablo Lamata, Department of Imaging Sciences and Biomedical Engineering, King's College London, London, United Kingdom
- Paul Leeson, Ultromics Ltd, Oxford, United Kingdom; Cardiovascular Clinical Research Facility, RDM Division of Cardiovascular Medicine, University of Oxford, Oxford, United Kingdom
|
76
|
Liang Z, Zhang S, Wu J, Li X, Zhuang Z, Feng Q, Chen W, Qi L. Automatic 3-D segmentation and volumetric light fluence correction for photoacoustic tomography based on optimal 3-D graph search. Med Image Anal 2021; 75:102275. [PMID: 34800786 DOI: 10.1016/j.media.2021.102275] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 10/11/2021] [Accepted: 10/15/2021] [Indexed: 01/29/2023]
Abstract
Preclinical imaging with photoacoustic tomography (PAT) has attracted wide attention in recent years since it is capable of providing molecular contrast with deep imaging depth. The automatic extraction and segmentation of the animal in PAT images is crucial for improving image analysis efficiency and enabling advanced image post-processing, such as light fluence (LF) correction for quantitative PAT imaging. Previous automatic segmentation methods are mostly two-dimensional approaches, which failed to conserve the 3-D surface continuity because the image slices were processed separately. This discontinuity problem further hampers LF correction, which, ideally, should be carried out in 3-D due to spatially diffused illumination. Here, to solve these problems, we propose a volumetric auto-segmentation method for small animal PAT imaging based on the 3-D optimal graph search (3-D GS) algorithm. The 3-D GS algorithm takes into account the relation among image slices by constructing a 3-D node-weighted directed graph, and thus ensures surface continuity. In view of the characteristics of PAT images, we improve the original 3-D GS algorithm on graph construction, solution guidance and cost assignment, such that the accuracy and smoothness of the segmented animal surface were guaranteed. We tested the performance of the proposed method by conducting in vivo nude mice imaging experiments with a commercial preclinical cross-sectional PAT system. The results showed that our method successfully retained the continuous global surface structure of the whole 3-D animal body, as well as smooth local subcutaneous tumor boundaries at different development stages. Moreover, based on the 3-D segmentation result, we were able to simulate volumetric LF distribution of the entire animal body and obtained LF corrected PAT images with enhanced structural visibility and uniform image intensity.
Affiliation(s)
- Zhichao Liang, Shuangyang Zhang, Jian Wu, Xipan Li, Zhijian Zhuang, Qianjin Feng, Wufan Chen, and Li Qi: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
|
77
|
Martorell A, Martin-Gorgojo A, Ríos-Viñuela E, Rueda-Carnero J, Alfageme F, Taberner R. Artificial intelligence in dermatology: A threat or an opportunity? ACTAS DERMO-SIFILIOGRAFICAS 2021. [DOI: 10.1016/j.adengl.2021.11.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
|
78
|
Brindise MC, Meyers BA, Kutty S, Vlachos PP. Automated peak prominence-based iterative Dijkstra's algorithm for segmentation of B-mode echocardiograms. IEEE Trans Biomed Eng 2021; 69:1595-1607. [PMID: 34714729 DOI: 10.1109/tbme.2021.3123612] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We present a user-initialized, automated segmentation method for use with echocardiograms (echo). The method uses an iterative Dijkstra's algorithm, a strategic node selection, and a novel cost matrix formulation based on intensity peak prominence, and is termed the Prominence Iterative Dijkstra's algorithm, or ProID. ProID is initialized with three user-input clicks per time-series scan. ProID was tested using artificial echo images representing five different systems; results showed accurate LV contours and volume estimations as compared with the ground truth for all systems. Using the CAMUS dataset, we demonstrate that ProID maintained Dice similarity scores comparable to other automated methods. ProID was then used to analyze a clinical cohort of 66 pediatric patients, including normal and diseased hearts. Output segmentations, end-diastolic and end-systolic volumes, and ejection fraction were compared against manual segmentations from two expert readers. ProID maintained an average Dice score of 0.93 against manual segmentation. Comparing the two expert readers, the manual segmentations maintained a score of 0.93, which increased to 0.95 when they used ProID; ProID thus reduced the inter-operator variability across the expert readers. Overall, this work demonstrates that ProID yields accurate boundaries across age groups, disease states, and echo platforms with low computational cost and no need for training data.
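The core of such a boundary tracker, a cheapest path across the columns of a cost image, can be sketched with a standard Dijkstra search. This is a generic illustration under stated assumptions, not the ProID implementation; in ProID the costs would be derived from intensity peak prominence rather than given directly:

```python
import heapq
import itertools

def min_cost_boundary(cost):
    """Cheapest left-to-right path across a cost image, moving to an
    adjacent row in the next column at every step (boundary tracing on
    an unwrapped B-mode frame). Returns one row index per column."""
    rows, cols = len(cost), len(cost[0])
    tie = itertools.count()  # breaks heap ties without comparing nodes
    pq = [(cost[r][0], next(tie), r, 0, None) for r in range(rows)]
    heapq.heapify(pq)
    done, parent = set(), {}
    while pq:
        c, _, r, col, prev = heapq.heappop(pq)
        if (r, col) in done:
            continue
        done.add((r, col))
        parent[(r, col)] = prev
        if col == cols - 1:            # first settled node in the last
            path, node = [], (r, col)  # column lies on a cheapest path
            while node is not None:
                path.append(node[0])
                node = parent[node]
            return path[::-1]
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < rows and (nr, col + 1) not in done:
                heapq.heappush(pq, (c + cost[nr][col + 1], next(tie),
                                    nr, col + 1, (r, col)))
    return []
```

Low-cost pixels (prominent intensity peaks in ProID's formulation) attract the path, which stays row-continuous from column to column.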
|
79
|
An S, Zhu H, Wang Y, Zhou F, Zhou X, Yang X, Zhang Y, Liu X, Jiao Z, He Y. A category attention instance segmentation network for four cardiac chambers segmentation in fetal echocardiography. Comput Med Imaging Graph 2021; 93:101983. [PMID: 34610500 DOI: 10.1016/j.compmedimag.2021.101983] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 07/28/2021] [Accepted: 08/19/2021] [Indexed: 11/27/2022]
Abstract
Fetal echocardiography is an essential and comprehensive examination technique for the detection of fetal heart anomalies. Accurate cardiac chambers segmentation can assist cardiologists to analyze cardiac morphology and facilitate heart disease diagnosis. Previous research mainly focused on the segmentation of single cardiac chambers, such as left ventricle (LV) segmentation or left atrium (LA) segmentation. We propose a generic framework based on instance segmentation to segment the four cardiac chambers accurately and simultaneously. The proposed Category Attention Instance Segmentation Network (CA-ISNet) has three branches: a category branch for predicting the semantic category, a mask branch for segmenting the cardiac chambers, and a category attention branch for learning category information of instances. The category attention branch is used to correct instance misclassification of the category branch. In our collected dataset, which contains echocardiography images with four-chamber views of 319 fetuses, experimental results show our method can achieve superior segmentation performance against state-of-the-art methods. Specifically, using fivefold cross-validation, our model achieves Dice coefficients of 0.7956, 0.7619, 0.8199, 0.7470 for the four cardiac chambers, and with an average precision of 45.64%.
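The per-chamber Dice coefficients reported above are computed independently per label; a minimal sketch on flattened label maps, assuming for illustration that labels 1-4 stand for the four chambers and 0 for background:

```python
def dice_per_class(pred, truth, classes=(1, 2, 3, 4)):
    """Per-class Dice coefficient between two label maps given as
    flattened sequences of integer labels (0 = background)."""
    scores = {}
    for c in classes:
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(truth) if v == c}
        denom = len(p) + len(t)
        # Convention: empty-vs-empty counts as a perfect match.
        scores[c] = 2.0 * len(p & t) / denom if denom else 1.0
    return scores
```

Averaging the four per-class scores reproduces the kind of summary the paper reports for its fivefold cross-validation.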
Affiliation(s)
- Shan An, State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China
- Haogang Zhu, State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beijing 100191, China
- Yuanshuai Wang, College of Sciences, Northeastern University, Shenyang 110819, China
- Fangru Zhou, State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China
- Xiaoxue Zhou, Beijing Anzhen Hospital affiliated to Capital Medical University, Beijing 100029, China
- Xu Yang, Beijing Anzhen Hospital affiliated to Capital Medical University, Beijing 100029, China
- Yingying Zhang, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Xiangyu Liu, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Zhicheng Jiao, Perelman School of Medicine at University of Pennsylvania, PA, USA
- Yihua He, Beijing Anzhen Hospital affiliated to Capital Medical University, Beijing 100029, China; Beijing Lab for Cardiovascular Precision Medicine, Beijing, China
|
80
|
Zhu F, Liu M, Wang F, Qiu D, Li R, Dai C. Automatic measurement of fetal femur length in ultrasound images: a comparison of random forest regression model and SegNet. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:7790-7805. [PMID: 34814276 DOI: 10.3934/mbe.2021387] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The aim of this work is the preliminary clinical validation and accuracy evaluation of our automatic algorithms for assessing fetal femur length (FL) in ultrasound images, and to compare a random forest regression model with a SegNet model in terms of accuracy and robustness. We propose a traditional machine learning method that detects the endpoints of the FL based on a random forest regression model, and a deep learning method for automatic FL measurement based on SegNet, which uses skeletonization and an improved fully convolutional network. The automatic measurements of the two methods were then evaluated quantitatively and qualitatively against the doctors' annotations. In total, 436 ultrasound fetal femur images were evaluated by both methods. Compared with the doctors' manual annotations, the measurement error of the random forest regression model was 1.23 ± 4.66 mm, and that of the SegNet-based method was 0.46 ± 2.82 mm; this distance metric is markedly lower than in the previous literature. The SegNet-based method performed better in cases of femoral end adhesion, low contrast, and noise with a shape similar to the femur. The SegNet-based method thus achieves promising performance compared with the random forest regression model and can improve the accuracy and robustness of fetal femur length measurement in ultrasound images.
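The skeletonization-based measurement can be reduced to locating the two endpoints of the femur skeleton and scaling their distance by the pixel spacing; a minimal sketch under that assumption (this is not the authors' SegNet pipeline, and assumes a clean 1-pixel-wide skeleton with exactly two endpoints):

```python
import math

def skeleton_endpoints(pixels):
    """Endpoints of a 1-pixel-wide skeleton: pixels with exactly one
    8-connected neighbour. `pixels` is a set of (row, col) tuples."""
    ends = []
    for r, c in pixels:
        nb = sum(1 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr or dc) and (r + dr, c + dc) in pixels)
        if nb == 1:
            ends.append((r, c))
    return ends

def femur_length_mm(pixels, spacing_mm):
    """Femur length as the Euclidean distance between the two skeleton
    endpoints, converted to millimetres by the pixel spacing."""
    (r1, c1), (r2, c2) = skeleton_endpoints(pixels)
    return math.hypot(r2 - r1, c2 - c1) * spacing_mm
```

A 5-pixel horizontal skeleton at 0.5 mm/pixel spacing measures 2.0 mm between its endpoints.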
Affiliation(s)
- Fengcheng Zhu, Mengyuan Liu, Di Qiu, Ruiman Li, and Chenyang Dai: Department of Gynaecology and Obstetrics, the First Affiliated Hospital of Jinan University, Guangzhou, China
- Feifei Wang: Anesthesiology Department, the First Affiliated Hospital of Jinan University, Guangzhou, China
|
81
|
Xu L, Gao S, Shi L, Wei B, Liu X, Zhang J, He Y. Exploiting Vector Attention and Context Prior for Ultrasound Image Segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
82
|
Angel-Raya E, Chalopin C, Avina-Cervantes JG, Cruz-Aceves I, Wein W, Lindner D. Segmentation of brain tumour in 3D Intraoperative Ultrasound imaging. Int J Med Robot 2021; 17:e2320. [PMID: 34405533 DOI: 10.1002/rcs.2320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 07/30/2021] [Accepted: 08/01/2021] [Indexed: 11/12/2022]
Abstract
BACKGROUND Intraoperative ultrasound (iUS), used with a navigation system and preoperative magnetic resonance imaging (pMRI), supports the surgeon intraoperatively in identifying tumour margins. Visual tumour enhancement can therefore be supported by efficient segmentation methods. METHODS A semi-automatic method and two registration-based segmentation methods are evaluated for extracting brain tumours from 3D-iUS data. The registration-based methods estimate the brain deformation after craniotomy based on pMRI and 3D-iUS data; both approaches use the normalised gradient field and linear correlation of linear combinations metrics. The proposed methods were evaluated on 66 B-mode and contrast-mode 3D-iUS data sets with metastasis and glioblastoma. RESULTS The semi-automatic segmentation achieved superior results, with Dice similarity index (DSI) values between [85.34, 86.79]% and contour mean distance values between [1.05, 1.11] mm for both modalities and tumour classes. CONCLUSIONS Better segmentation results were obtained for metastasis detection than for glioblastoma, favouring 3D-intraoperative B-mode over 3D-intraoperative contrast-mode.
Affiliation(s)
- Erick Angel-Raya, Engineering Division (DICIS), Department of Electronics Engineering, University of Guanajuato, Campus Irapuato-Salamanca, Salamanca, Mexico
- Claire Chalopin, Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Leipzig, Germany
- Juan Gabriel Avina-Cervantes, Engineering Division (DICIS), Department of Electronics Engineering, University of Guanajuato, Campus Irapuato-Salamanca, Salamanca, Mexico
- Ivan Cruz-Aceves, CONACYT - Centro de Investigación en Matemáticas (CIMAT), Guanajuato, Mexico
- Dirk Lindner, Department of Neurosurgery, University Hospital Leipzig, Leipzig, Germany
|
83
|
Sahli H, Ben Slama A, Mouelhi A, Soayeh N, Rachdi R, Sayadi M. A computer-aided method based on geometrical texture features for a precocious detection of fetal Hydrocephalus in ultrasound images. Technol Health Care 2021; 28:643-664. [PMID: 32200362 DOI: 10.3233/thc-191752] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Hydrocephalus is the most common anomaly of the fetal head, characterized by an excessive accumulation of fluid in the brain. The diagnostic process for fetal heads using traditional evaluation techniques is generally time consuming and error prone. Usually, fetal head size is computed from an ultrasound (US) image around 20-22 weeks of gestational age (GA); biometric measurements are extracted and compared with ground-truth charts to identify normal or abnormal growth. METHODS In this paper, an attempt has been made to enhance the hydrocephalus characterization process by extracting additional geometrical and textural features to design an efficient recognition system. The advantage of this work lies in its reduced processing time and complexity compared with standard automatic approaches for routine examination; the aim is the early detection of fetal malformation so that experts can be alerted to a possible abnormal outcome. The first task is a proposed pre-processing model using standard filtering and a segmentation scheme based on a modified Hough transform (MHT) to detect the region of interest. The obtained clinical parameters are then passed to a principal component analysis (PCA) model to obtain a reduced set of measures, which are employed in the classification stage. RESULTS Thanks to the combination of geometrical and statistical features, the classification process achieved more than 96% accuracy in detecting pathological subjects at premature ages. CONCLUSIONS The experimental results illustrate the success and accuracy of the proposed classification method for a factual diagnosis of fetal head malformation.
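The PCA reduction step can be sketched as extracting the leading principal axis of the feature covariance matrix. This pure-Python power-iteration sketch is illustrative, not the authors' implementation:

```python
def first_principal_component(samples, iters=200):
    """First PCA axis via power iteration on the sample covariance
    matrix; `samples` is a list of equal-length feature vectors."""
    n, d = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(d)]
    x = [[s[j] - means[j] for j in range(d)] for s in samples]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    return v  # unit vector along the direction of maximum variance
```

Projecting each feature vector onto the leading axes yields the reduced set of measures that feeds the classification stage.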
Affiliation(s)
- Hanene Sahli, University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Amine Ben Slama, University of Tunis El Manar, ISTMT, LR13ES07, LRBTM, Tunis, Tunisia
- Aymen Mouelhi, University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
- Nesrine Soayeh, Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Radhouane Rachdi, Obstetrics, Gynecology and Reproductive Department, Military Hospital, Tunis, Tunisia
- Mounir Sayadi, University of Tunis, ENSIT, LR13ES03 SIME, Tunis, Tunisia
|
84
|
Fang L, Zhang L, Yao Y. Integrating a learned probabilistic model with energy functional for ultrasound image segmentation. Med Biol Eng Comput 2021; 59:1917-1931. [PMID: 34383220 DOI: 10.1007/s11517-021-02411-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 07/03/2021] [Indexed: 11/26/2022]
Abstract
The segmentation of ultrasound (US) images is steadily growing in popularity, owing to the necessity of computer-aided diagnosis (CAD) systems and the advantages this technique offers, such as safety and efficiency. The objective of this work is to separate the lesion from its background in US images. However, most US images are of poor quality, affected by noise, ambiguous boundaries, and heterogeneity. Moreover, the lesion region may not be salient amid the other normal tissues, which makes its segmentation a challenging problem. In this paper, a US image segmentation algorithm that combines a learned probabilistic model with energy functionals is proposed. Firstly, a learned probabilistic model based on the generalized linear model (GLM) reduces false positives and increases the likelihood energy term of the lesion region; it yields a new probability projection that attracts the energy functional toward the desired region of interest. Then, a boundary indicator and a probability statistics-based energy functional are used to provide a reliable boundary for the lesion. Integrating probabilistic information into the energy functional framework effectively mitigates the impact of poor image quality and further improves segmentation accuracy. To verify the performance of the proposed algorithm, 40 images were randomly selected from three databases for evaluation. The values of the Dice coefficient, the Jaccard distance, root-mean-square error, and mean absolute error are 0.96, 0.91, 0.059, and 0.042, respectively. Besides, the initialization of the segmentation algorithm and the influence of noise are also analyzed. The experiments show a significant improvement in performance.
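The GLM-based probability term can be sketched as a logistic link applied to per-pixel features; the weights and bias here are hypothetical stand-ins for the learned model, and this is only an illustration of the probability projection idea, not the paper's energy functional:

```python
import math

def glm_probability(features, weights, bias):
    """Per-pixel lesion probability from a logistic GLM: a weighted
    sum of pixel features passed through the logistic link. High
    values attract the evolving contour toward the lesion."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

A zero activation maps to probability 0.5, and strongly lesion-like features push the probability toward 1, raising the likelihood energy of that region.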
Affiliation(s)
- Lingling Fang, Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China; Nanchang Institute of Technology, Nanchang, Jiangxi Province, China
- Lirong Zhang, Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
- Yibo Yao, Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
|
85
|
Echocardiogram segmentation using active shape model and mean squared eigenvalue error. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102807] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
86
|
Honarvar Shakibaei Asli B, Zhao Y, Erkoyuncu JA. Motion blur invariant for estimating motion parameters of medical ultrasound images. Sci Rep 2021; 11:14312. [PMID: 34253807 PMCID: PMC8275601 DOI: 10.1038/s41598-021-93636-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 06/22/2021] [Indexed: 11/15/2022] Open
Abstract
High-quality medical ultrasound imaging is hindered by motion blur, while medical image analysis requires motionless, accurate data acquired by sonographers. The main idea of this paper is to establish motion-blur invariants in both the frequency and moment domains to estimate the motion parameters of ultrasound images. We propose a discrete model of the point spread function of the motion-blur convolution based on the Dirac delta function, which simplifies the analysis of motion invariants in the frequency and moment domains. This model paves the way for estimating the motion angle and length in terms of the proposed invariant features. In this research, the performance of the proposed schemes is compared with other state-of-the-art image deblurring methods. The experimental study is performed using fetal phantom images, clinical fetal ultrasound images, and breast scans. Moreover, to validate the accuracy of the proposed experimental framework, we apply two image quality assessment methods, no-reference and full-reference, to show the robustness of the proposed algorithms compared with well-known approaches.
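The discrete motion-blur model described here can be sketched minimally: the point spread function (PSF) of uniform linear motion is a train of equally weighted Dirac deltas along the motion direction. The toy example below applies a horizontal (angle 0) blur of length 3 to a 1-D step edge; the signal values are illustrative assumptions, not the authors' implementation.

```python
# Discrete PSF of uniform linear motion and its effect on a 1-D step edge.

def motion_blur_psf(length):
    """PSF of uniform linear motion: `length` impulses, each weighted 1/length."""
    return [1.0 / length] * length

def convolve(signal, kernel):
    """Full discrete convolution of two sequences."""
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

edge = [0, 0, 0, 1, 1, 1]                    # an ideal step edge
blurred = convolve(edge, motion_blur_psf(3))
# Full convolution ramps the sharp step and tails off at the signal's end:
# [0, 0, 0, 1/3, 2/3, 1, 2/3, 1/3]
```

Estimating the blur length then amounts to recovering the width of this ramp from invariant features, which is what the paper's frequency- and moment-domain invariants target.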
Collapse
Affiliation(s)
- Barmak Honarvar Shakibaei Asli
- Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK. .,Czech Academy of Sciences, Institute of Information Theory and Automation, Pod vodárenskou věží 4, 18208, Prague 8, Czech Republic.
| | - Yifan Zhao
- Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
| | - John Ahmet Erkoyuncu
- Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
| |
Collapse
|
87
|
Weitzel WF, Rajaram N, Krishnamurthy VN, Hamilton J, Thelen BJ, Zheng Y, Morgan T, Funes-Lora MA, Yessayan L, Bishop B, Shih AJ. Sono-angiography for dialysis vascular access based on the freehand 2D ultrasound scanning. J Vasc Access 2021; 23:871-876. [PMID: 33971754 DOI: 10.1177/11297298211015066] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
INTRODUCTION Dialysis vascular access, preferably an autogenous arteriovenous fistula, remains an end-stage renal disease (ESRD) patient's lifeline, providing the means of connecting the patient to the dialysis machine. Once an access is created, the current gold standard of care for maintenance of vascular access is angiography and angioplasty to treat stenosis. While point-of-care 2D ultrasound has been used to detect access problems, we sought to reproduce angiographic results comparable to the gold-standard angiogram (fistulogram) using ultrasound data acquired from a conventional 2D ultrasound scanner. METHODS A 2D ultrasound probe was used to acquire a series of cross-sectional images of the vascular access, including the arteriovenous anastomosis, of a subject with a radio-cephalic fistula. These 2D B-mode images were used for 3D vessel reconstruction by binary thresholding to categorize vascular versus non-vascular structures, followed by standard image segmentation to select the structure representing the dialysis vascular access, and morphologic filtering. Image processing was done using open-source Python software. RESULTS The open-source software was able to: (1) view the gold-standard fistulogram images, (2) reconstruct 2D planar images of the fistula from ultrasound data as viewed from the top, analogous to computerized tomography images, and (3) construct a 2D representation of the vascular access similar to the angiogram. CONCLUSION We present a simple approach to obtain an angiogram-like representation of the vascular access from readily available, non-proprietary 2D ultrasound data in the point-of-care setting. While the sono-angiogram is not intended to replace angiography, it may be useful in providing 3D imaging at the point of care in the dialysis unit or outpatient clinic, or for pre-operative planning for interventional procedures.
Future work will focus on improving the robustness and quality of the imaging data while preserving the straightforward freehand approach used for ultrasound data acquisition.
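The reconstruction steps described in METHODS (binary thresholding to separate vascular from non-vascular structures, followed by morphological filtering) can be sketched on a toy intensity grid; the grid values, threshold, and erosion kernel below are illustrative assumptions, not taken from the paper.

```python
# Toy sketch: intensity thresholding followed by a simple morphological erosion.

def threshold(image, t):
    """Binarize: 1 where intensity >= t (candidate vessel structure), else 0."""
    return [[1 if v >= t else 0 for v in row] for row in image]

def erode(mask):
    """Keep an interior pixel only if it and its 4-neighbourhood are foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (mask[y][x] and mask[y - 1][x] and mask[y + 1][x]
                    and mask[y][x - 1] and mask[y][x + 1]):
                out[y][x] = 1
    return out

image = [
    [10, 12, 11, 13],
    [12, 90, 85, 11],
    [11, 88, 92, 12],
    [13, 11, 12, 10],
]
mask = threshold(image, 50)   # isolates the bright 2x2 core
clean = erode(mask)           # the 2x2 blob is too thin and is eroded away
```

In the paper's pipeline this kind of filtering suppresses small non-vascular speckle so the remaining connected structure can be stacked across slices into the 3D vessel.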
Collapse
Affiliation(s)
- William F Weitzel
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
| | - Nirmala Rajaram
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
| | - Venkataramu N Krishnamurthy
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Departments of Radiology and Surgery, University of Michigan, Ann Arbor, MI, USA
| | - James Hamilton
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Emerge Now Inc., Los Angeles, CA, USA
| | - Brian J Thelen
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Department of Statistics, University of Michigan, Ann Arbor, MI, USA.,Michigan Tech Research Institute, Michigan Technological University, Ann Arbor, MI, USA
| | - Yihao Zheng
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Department of Mechanical Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
| | | | | | - Lenar Yessayan
- VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.,Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
| | | | - Albert J Shih
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA
| |
Collapse
|
88
|
Lei Y, Fu Y, Roper J, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. Echocardiographic image multi-structure segmentation using Cardiac-SegNet. Med Phys 2021; 48:2426-2437. [PMID: 33655564 PMCID: PMC11698071 DOI: 10.1002/mp.14818] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 02/25/2021] [Accepted: 02/26/2021] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Cardiac boundary segmentation of echocardiographic images is important for cardiac function assessment and disease diagnosis. However, it is challenging to segment cardiac ventricles due to the low contrast-to-noise ratio and speckle noise of echocardiographic images. Manual segmentation is subject to interobserver variability and is too slow for real-time image-guided interventions. We aim to develop a deep learning-based method for automated multi-structure segmentation of echocardiographic images. METHODS We developed an anchor-free mask convolutional neural network (CNN), termed Cardiac-SegNet, which consists of three subnetworks: a backbone, a fully convolutional one-stage object detector (FCOS) head, and a mask head. The backbone extracts multi-level and multi-scale features from the endocardium image. The FCOS head utilizes these features to detect and label the regions of interest (ROIs) of the segmentation targets. Unlike the traditional mask regional CNN (Mask R-CNN) method, the FCOS head is anchor-free and can model the spatial relationship of the targets. The mask head utilizes a spatial attention strategy, which allows the network to highlight salient features and perform segmentation on each detected ROI. For evaluation, we investigated 450 patient datasets with five-fold cross-validation and a hold-out test. The endocardium (LVEndo) and epicardium (LVEpi) of the left ventricle and the left atrium (LA) were segmented and compared with manual contours using the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean absolute distance (MAD), and center-of-mass distance (CMD). RESULTS Compared to U-Net and Mask R-CNN, our method achieved higher segmentation accuracy and fewer erroneous speckles.
When our method was evaluated on a separate hold-out dataset at the end-diastole (ED) and end-systole (ES) phases, the average DSCs were 0.952 and 0.939 at ED and ES for the LVEndo, 0.965 and 0.959 at ED and ES for the LVEpi, and 0.924 and 0.926 at ED and ES for the LA. For patients with a typical image size of 549 × 788 pixels, the proposed method can perform the segmentation within 0.5 s. CONCLUSION We proposed a fast and accurate method to segment echocardiographic images using an anchor-free mask CNN.
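One of the contour metrics used above, the Hausdorff distance (HD), can be sketched on toy 2-D point sets standing in for an automatic and a manual contour; the points are made up for illustration.

```python
# Symmetric Hausdorff distance between two 2-D point sets. Illustrative only.
import math

def directed_hd(a, b):
    """Max over points in a of the distance to the nearest point in b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a and b."""
    return max(directed_hd(a, b), directed_hd(b, a))

auto   = [(0, 0), (1, 0), (2, 0)]   # sampled automatic contour
manual = [(0, 1), (1, 1), (2, 4)]   # sampled manual contour
hd = hausdorff(auto, manual)        # the outlier (2, 4) dominates: 4.0
```

This worst-case behaviour is why HD is reported alongside averaged metrics such as DSC and MAD: a single stray contour point inflates HD even when overlap is high.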
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Jeffrey D. Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
89
|
Rajajee V, Soroushmehr R, Williamson CA, Najarian K, Gryak J, Awad A, Ward KR, Tiba MH. Novel Algorithm for Automated Optic Nerve Sheath Diameter Measurement Using a Clustering Approach. Mil Med 2021; 186:496-501. [PMID: 32830251 DOI: 10.1093/milmed/usaa231] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 07/09/2020] [Accepted: 08/07/2020] [Indexed: 11/12/2022] Open
Abstract
INTRODUCTION Using ultrasound to measure optic nerve sheath diameter (ONSD) has been shown to be a useful modality to detect elevated intracranial pressure. However, manual assessment of ONSD by a human operator is cumbersome and prone to human error. We aimed to develop and test an automated algorithm for ONSD measurement using ultrasound images and to compare it with measurements performed by physicians. MATERIALS AND METHODS Patients were recruited from the Neurological Intensive Care Unit. Ultrasound images of the optic nerve sheath from both eyes were obtained using an ultrasound unit with an ocular preset. Images were processed by two attending physicians to calculate ONSD manually. The images were also processed using a novel computerized algorithm that automatically analyzes ultrasound images and calculates ONSD. Algorithm-measured ONSD was compared with manually measured ONSD using multiple statistical measures. RESULTS Forty-four patients with a mean (standard deviation, SD) intracranial pressure of 14 (9.7) mmHg (range 1-57 mmHg) were recruited and tested. A t-test showed no statistical difference between the ONSD from the left and right eyes (P > 0.05). Furthermore, a paired t-test showed no significant difference between the manually and algorithm-measured ONSD, with a mean difference (SD) of 0.012 (0.046) cm (P > 0.05) and a percentage error of difference of 6.43% (P = 0.15). Agreement between the two operators was highly correlated (intraclass correlation coefficient = 0.8, P = 0.26). Bland-Altman analysis revealed a mean difference (SD) of 0.012 (0.046) (P = 0.303) and limits of agreement between -0.1 and 0.08. Receiver operating characteristic analysis yielded an area under the curve of 0.965 (P < 0.0001) with high sensitivity and specificity. CONCLUSION The automated image-analysis algorithm calculates ONSD reliably and with high precision when compared to measurements obtained by expert physicians.
The algorithm may have a role in computer-aided decision support systems in acute brain injury.
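The Bland-Altman analysis reported above (mean difference, plus limits of agreement at mean ± 1.96·SD of the paired differences) can be sketched as follows; the paired ONSD measurements are invented for illustration.

```python
# Bland-Altman bias and 95% limits of agreement for paired measurements.
import statistics

def bland_altman(manual, auto):
    """Return (mean difference, lower limit, upper limit) of agreement."""
    diffs = [m - a for m, a in zip(manual, auto)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

manual = [0.52, 0.48, 0.61, 0.55, 0.47]   # ONSD in cm, physician (made up)
auto   = [0.50, 0.49, 0.58, 0.56, 0.46]   # ONSD in cm, algorithm (made up)
bias, lo, hi = bland_altman(manual, auto)
```

A bias near zero with narrow limits, as in the paper's 0.012 (−0.1 to 0.08), indicates the algorithm neither systematically over- nor under-measures relative to the physicians.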
Collapse
Affiliation(s)
- Venkatakrishna Rajajee
- Department of Neurological Surgery, University of Michigan, Ann Arbor, MI 48109-5338, USA.,Department of Neurology, University of Michigan, Ann Arbor, MI 48109-5316, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Reza Soroushmehr
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109-2218, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Craig A Williamson
- Department of Neurological Surgery, University of Michigan, Ann Arbor, MI 48109-5338, USA.,Department of Neurology, University of Michigan, Ann Arbor, MI 48109-5316, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Kayvan Najarian
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109-2218, USA.,Department of Emergency Medicine, University of Michigan, Ann Arbor, MI 48109-2800, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Jonathan Gryak
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109-2218, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Abdelrahman Awad
- Department of Emergency Medicine, University of Michigan, Ann Arbor, MI 48109-2800, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Kevin R Ward
- Department of Emergency Medicine, University of Michigan, Ann Arbor, MI 48109-2800, USA.,Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109-2099, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| | - Mohamad H Tiba
- Department of Emergency Medicine, University of Michigan, Ann Arbor, MI 48109-2800, USA.,Michigan Center for Integrative Research in Critical Care (MCIRCC), Ann Arbor, MI 48109-2800, USA
| |
Collapse
|
90
|
Tong J, Li K, Lin W, Shudong X, Anwar A, Jiang L. Automatic lumen border detection in IVUS images using dictionary learning and kernel sparse representation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
91
|
Lee K, Kim JY, Lee MH, Choi CH, Hwang JY. Imbalanced Loss-Integrated Deep-Learning-Based Ultrasound Image Analysis for Diagnosis of Rotator-Cuff Tear. SENSORS 2021; 21:s21062214. [PMID: 33809972 PMCID: PMC8005102 DOI: 10.3390/s21062214] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 03/08/2021] [Accepted: 03/11/2021] [Indexed: 12/19/2022]
Abstract
A rotator cuff tear (RCT) is an injury in adults that causes difficulty in moving, weakness, and pain. Only limited diagnostic tools such as magnetic resonance imaging (MRI) and ultrasound imaging (UI) systems can be utilized for an RCT diagnosis. Although UI offers performance comparable to other diagnostic instruments such as MRI at a lower cost, speckle noise can degrade the image resolution. Conventional vision-based algorithms exhibit inferior performance for the segmentation of diseased regions in UI. In order to achieve better segmentation of diseased regions in UI, deep-learning-based diagnostic algorithms have been developed. However, they have not yet reached an acceptable level of performance for application in orthopedic surgeries. In this study, we developed a novel end-to-end fully convolutional neural network, denoted Segmentation Model Adopting a pRe-trained Classification Architecture (SMART-CA), with a novel integrated positive loss function (IPLF), to accurately locate RCTs during an orthopedic examination using UI. Using the pre-trained network, SMART-CA can extract remarkably distinct features that cannot be extracted with a normal encoder, and can therefore improve segmentation accuracy. In addition, unlike other conventional loss functions, which are not suited to optimizing deep learning models on an imbalanced dataset such as the RCT dataset, the IPLF can efficiently optimize SMART-CA. Experimental results show that SMART-CA achieves improved precision, recall, and Dice coefficient of 0.604 (+38.4%), 0.942 (+14.0%), and 0.736 (+38.6%), respectively, for RCT segmentation from a normal ultrasound image, and improved precision, recall, and Dice coefficient of 0.337 (+22.5%), 0.860 (+15.8%), and 0.484 (+28.5%), respectively, for RCT segmentation from an ultrasound image with severe speckle noise.
The experimental results demonstrate that the IPLF outperforms other conventional loss functions, and that the proposed SMART-CA optimized with the IPLF outperforms other state-of-the-art networks for RCT segmentation, with high robustness to speckle noise.
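The abstract does not specify the IPLF itself; as a hedged stand-in, the sketch below shows a soft Dice loss, a common choice for class-imbalanced segmentation, and why a trivial all-background prediction scores poorly under it.

```python
# Soft Dice loss on flat lists of predicted probabilities and binary labels.
# Not the authors' IPLF; a generic imbalance-aware loss for illustration.

def soft_dice_loss(probs, labels, eps=1e-7):
    """1 - soft Dice; dominated by the (rare) positive class, so a model
    cannot score well by predicting all-background on an imbalanced dataset."""
    inter = sum(p * t for p, t in zip(probs, labels))
    denom = sum(probs) + sum(labels)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

labels = [0, 0, 0, 0, 0, 0, 0, 1]   # heavily imbalanced ground truth
all_bg = [0.0] * 8                  # trivial all-background prediction
good   = [0.1, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.9]
# The trivial prediction is penalized (~1.0) despite 7/8 pixels being "correct".
```

A plain per-pixel accuracy would rate `all_bg` at 87.5%; the overlap-based loss instead makes the rare tear region dominate the objective, which is the property the IPLF is designed around.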
Collapse
Affiliation(s)
- Kyungsu Lee
- Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea; (K.L.); (M.H.L.)
| | - Jun Young Kim
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
| | - Moon Hwan Lee
- Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu 42988, Korea; (K.L.); (M.H.L.)
| | - Chang-Hyuk Choi
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
- Correspondence: (C.-H.C.); (J.Y.H.)
| | - Jae Youn Hwang
- The Department of Orthopedic Surgery, School of Medicine, Catholic University, Daegu 42472, Korea;
- Correspondence: (C.-H.C.); (J.Y.H.)
| |
Collapse
|
92
|
McDermott C, Łącki M, Sainsbury B, Henry J, Filippov M, Rossa C. Sonographic Diagnosis of COVID-19: A Review of Image Processing for Lung Ultrasound. Front Big Data 2021; 4:612561. [PMID: 33748752 PMCID: PMC7968725 DOI: 10.3389/fdata.2021.612561] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Accepted: 01/14/2021] [Indexed: 12/24/2022] Open
Abstract
The sustained increase in new cases of COVID-19 across the world and potential for subsequent outbreaks call for new tools to assist health professionals with early diagnosis and patient monitoring. Growing evidence around the world is showing that lung ultrasound examination can detect manifestations of COVID-19 infection. Ultrasound imaging has several characteristics that make it ideally suited for routine use: small hand-held systems can be contained inside a protective sheath, making it easier to disinfect than X-ray or computed tomography equipment; lung ultrasound allows triage of patients in long term care homes, tents or other areas outside of the hospital where other imaging modalities are not available; and it can determine lung involvement during the early phases of the disease and monitor affected patients at bedside on a daily basis. However, some challenges still remain with routine use of lung ultrasound. Namely, current examination practices and image interpretation are quite challenging, especially for unspecialized personnel. This paper reviews how lung ultrasound (LUS) imaging can be used for COVID-19 diagnosis and explores different image processing methods that have the potential to detect manifestations of COVID-19 in LUS images. Then, the paper reviews how general lung ultrasound examinations are performed before addressing how COVID-19 manifests itself in the images. This will provide the basis to study contemporary methods for both segmentation and classification of lung ultrasound images. The paper concludes with a discussion regarding practical considerations of lung ultrasound image processing use and draws parallels between different methods to allow researchers to decide which particular method may be best considering their needs. 
With the deficit of trained sonographers who are working to diagnose the thousands of people afflicted by COVID-19, a partially or totally automated lung ultrasound detection and diagnosis tool would be a major asset to fight the pandemic at the front lines.
Collapse
Affiliation(s)
- Conor McDermott
- Faculty of Engineering and Applied Science, Ontario Tech University, Oshawa, ON, Canada
| | - Maciej Łącki
- Faculty of Engineering and Applied Science, Ontario Tech University, Oshawa, ON, Canada
| | | | | | | | - Carlos Rossa
- Faculty of Engineering and Applied Science, Ontario Tech University, Oshawa, ON, Canada
| |
Collapse
|
93
|
Li K, Tong J, Zhu X, Xia S. Automatic Lumen Border Detection in IVUS Images Using Deep Learning Model and Handcrafted Features. ULTRASONIC IMAGING 2021; 43:59-73. [PMID: 33448256 DOI: 10.1177/0161734620987288] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
In the clinical analysis of intravascular ultrasound (IVUS) images, the lumen size is an important indicator of coronary atherosclerosis and a prerequisite for coronary artery disease diagnosis and interventional treatment. In this study, a fully automatic method based on a deep learning model and handcrafted features is presented for the detection of lumen borders in IVUS images. First, 193 handcrafted features are extracted from the IVUS images. Hybrid feature vectors are then constructed by combining the handcrafted features with 64 high-level features extracted from U-Net. In order to obtain the feature subsets with the largest contribution, we employ extended binary cuckoo search for feature selection. Finally, the selected 36-dimensional hybrid feature subset is used to classify the test images using dictionary learning based on kernel sparse coding. The proposed algorithm is tested on a publicly available dataset and evaluated using three indicators. Through ablation experiments, the mean values of the results (Jaccard: 0.88, Hausdorff distance: 0.36, percentage of area difference: 0.06) prove the approach effective at improving lumen border detection. Furthermore, compared with recent methods evaluated on the same dataset, the proposed method shows good performance and high accuracy.
Collapse
Affiliation(s)
- Kai Li
- Zhejiang Sci-Tech University, Hangzhou, China
| | - Jijun Tong
- Zhejiang Sci-Tech University, Hangzhou, China
| | - Xinjian Zhu
- Zhejiang University School of Medicine, Yiwu, China
| | - Shudong Xia
- Zhejiang University School of Medicine, Yiwu, China
| |
Collapse
|
94
|
Song C, Gao T, Wang H, Sudirman S, Zhang W, Zhu H. The Classification and Segmentation of Fetal Anatomies Ultrasound Image: A Survey. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3616] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Ultrasound image processing technology has been used in obstetric observation of the fetus and the diagnosis of fetal diseases for more than half a century; it offers distinct advantages, poses unique challenges, and has developed rapidly. From the perspective of ultrasound image analysis, the first essentials are to determine fetal viability, gestational age, and so on. Approaches to analyzing ultrasound images of fetal anatomy have since been studied extensively, and ultrasound has become an indispensable diagnostic tool for detecting fetal abnormalities and gaining insight into the ongoing development of the fetus. It is now time to review previous approaches in this field systematically and to predict future directions. Thus, this article reviews state-of-the-art ultrasound imaging approaches for the whole fetus and individual anatomies, covering their basic ideas, theories, pros, and cons. It first summarizes the current open problems and introduces the popular image processing methods, such as classification and segmentation. It then briefly discusses the advantages and disadvantages of existing approaches as well as new research ideas. Finally, the challenges and future trends are discussed.
Collapse
Affiliation(s)
- Chunlin Song
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
| | - Tao Gao
- Obstetrics and Gynecology, Wuxi People’s Hospital, Wuxi, Jiangsu, 214023, China
| | - Hong Wang
- BOE Technology Group Co. Ltd., Beijing, 100176, China
| | - Sud Sudirman
- Department of Computer Science, Liverpool John Moores University, Liverpool, L3 3AF, UK
| | - Wei Zhang
- BOE Technology Group Co. Ltd., Beijing, 100176, China
| | - Haogang Zhu
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
| |
Collapse
|
95
|
Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. ULTRASONICS 2021; 111:106304. [PMID: 33360770 DOI: 10.1016/j.ultras.2020.106304] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Revised: 11/14/2020] [Accepted: 11/14/2020] [Indexed: 06/12/2023]
Abstract
Ultrasound image guided brain surgery (UGBS) requires an automatic and fast image segmentation method. Level-set and active contour based algorithms have been found useful for obtaining topology-independent boundaries between different image regions, but slow convergence limits their use in online US image segmentation, and their performance deteriorates on US images because of intensity inhomogeneity. This paper proposes an effective region-driven method for the segmentation of hyper-echoic (HE) regions in brain US images, suppressing the hypo-echoic and anechoic regions. An automatic threshold estimation scheme is developed with a modified Niblack's approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch-based intensity thresholding and boundary smoothing. First, a patch-based segmentation is performed, which roughly separates the two regions; the patch-based approach reduces the effect of intensity heterogeneity within an HE region. An iterative boundary-correction step with decreasing patch size then further improves the regional topology and refines the boundary regions. To avoid slope and curvature discontinuities and to obtain distinct boundaries between the HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than other level-set based image segmentation algorithms. The segmentation performance and the convergence speed of the proposed method are compared with four competing level-set based algorithms. The computational results show that the proposed segmentation approach outperforms the other level-set based techniques both subjectively and objectively.
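The classic Niblack threshold that the modified scheme above builds on sets the local threshold to T = mean + k·std of the patch intensities. The sketch below applies it to a toy patch; the intensity values and k are illustrative assumptions, and the paper's modification is not reproduced here.

```python
# Classic Niblack local thresholding on a single toy patch. Illustrative only.
import statistics

def niblack_threshold(patch, k=-0.2):
    """Classic Niblack: local mean plus k times local standard deviation."""
    flat = [v for row in patch for v in row]
    return statistics.mean(flat) + k * statistics.pstdev(flat)

patch = [
    [200, 210, 60],
    [205, 215, 55],
    [198, 208, 58],
]
t = niblack_threshold(patch)
bright = [[1 if v >= t else 0 for v in row] for row in patch]
# The hyper-echoic (bright) columns are separated from the dark column.
```

Computing T per patch rather than globally is what lets the threshold track intensity inhomogeneity across an image, which is the failure mode the paper attributes to level-set methods on US data.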
Collapse
Affiliation(s)
- Haradhan Chel
- Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India; City Clinic and Research Centre, Kokrajhar, Assam, India.
| | - P K Bora
- Department of EEE, Indian Institute of Technology Guwahati, Assam, India.
| | - K K Ramchiary
- City Clinic and Research Centre, Kokrajhar, Assam, India.
| |
Collapse
|
96
|
Lafci B, Mercep E, Morscher S, Dean-Ben XL, Razansky D. Deep Learning for Automatic Segmentation of Hybrid Optoacoustic Ultrasound (OPUS) Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:688-696. [PMID: 32894712 DOI: 10.1109/tuffc.2020.3022324] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The highly complementary information provided by multispectral optoacoustics and pulse-echo ultrasound has recently prompted the development of hybrid imaging instruments that bring together the unique contrast advantages of both modalities. In the hybrid optoacoustic ultrasound (OPUS) combination, images retrieved by one modality may further be used to improve the reconstruction accuracy of the other. In this regard, image segmentation plays a major role, as it can aid in improving image quality and quantification abilities by facilitating the modeling of light and sound propagation through the imaged tissues and surrounding coupling medium. Here, we propose an automated approach for surface segmentation in whole-body mouse OPUS imaging using a deep convolutional neural network (CNN). The method has shown robust performance, attaining accurate segmentation of the animal boundary in both optoacoustic and pulse-echo ultrasound images, as evinced by quantitative performance evaluation using Dice coefficient metrics.
Collapse
|
97
|
Jin J, Zhu H, Zhang J, Ai Y, Zhang J, Teng Y, Xie C, Jin X. Multiple U-Net-Based Automatic Segmentations and Radiomics Feature Stability on Ultrasound Images for Patients With Ovarian Cancer. Front Oncol 2021; 10:614201. [PMID: 33680934 PMCID: PMC7930567 DOI: 10.3389/fonc.2020.614201] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 12/29/2020] [Indexed: 12/21/2022] Open
Abstract
Few studies have reported the reproducibility and stability of ultrasound (US) images based radiomics features obtained from automatic segmentation in oncology. The purpose of this study is to study the accuracy of automatic segmentation algorithms based on multiple U-net models and their effects on radiomics features from US images for patients with ovarian cancer. A total of 469 US images from 127 patients were collected and randomly divided into three groups: training sets (353 images), validation sets (23 images), and test sets (93 images) for automatic segmentation models building. Manual segmentation of target volumes was delineated as ground truth. Automatic segmentations were conducted with U-net, U-net++, U-net with Resnet as the backbone (U-net with Resnet), and CE-Net. A python 3.7.0 and package Pyradiomics 2.2.0 were used to extract radiomic features from the segmented target volumes. The accuracy of automatic segmentations was evaluated by Jaccard similarity coefficient (JSC), dice similarity coefficient (DSC), and average surface distance (ASD). The reliability of radiomics features were evaluated by Pearson correlation and intraclass correlation coefficients (ICC). CE-Net and U-net with Resnet outperformed U-net and U-net++ in accuracy performance by achieving a DSC, JSC, and ASD of 0.87, 0.79, 8.54, and 0.86, 0.78, 10.00, respectively. A total of 97 features were extracted from the delineated target volumes. The average Pearson correlation was 0.86 (95% CI, 0.83–0.89), 0.87 (95% CI, 0.84–0.90), 0.88 (95% CI, 0.86–0.91), and 0.90 (95% CI, 0.88–0.92) for U-net++, U-net, U-net with Resnet, and CE-Net, respectively. The average ICC was 0.84 (95% CI, 0.81–0.87), 0.85 (95% CI, 0.82–0.88), 0.88 (95% CI, 0.85–0.90), and 0.89 (95% CI, 0.86–0.91) for U-net++, U-net, U-net with Resnet, and CE-Net, respectively. CE-Net based segmentation achieved the best radiomics reliability. 
In conclusion, U-net-based automatic segmentation was accurate enough to delineate the target volumes on US images for patients with ovarian cancer. Radiomics features extracted from the automatically segmented targets showed good reproducibility and reliability for further radiomics investigations.
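The overlap and reliability metrics this abstract relies on (DSC, JSC, ICC) can be sketched in a few lines of NumPy. The sketch below is illustrative only, not the authors' code: `dice_jaccard` and `icc_a1` are hypothetical helper names, and the ICC variant shown is the two-way, absolute-agreement, single-measurement form ICC(A,1), which the abstract does not specify.

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice (DSC) and Jaccard (JSC) overlap for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())
    return dsc, inter / union

def icc_a1(x, y):
    """ICC(A,1): two-way, absolute agreement, single measurement,
    here comparing one radiomics feature across two segmentations
    (n subjects x 2 'raters')."""
    data = np.stack([np.asarray(x, float), np.asarray(y, float)], axis=1)
    n, k = data.shape
    ms_r = k * np.var(data.mean(axis=1), ddof=1)   # between-subject mean square
    ms_c = n * np.var(data.mean(axis=0), ddof=1)   # between-rater mean square
    ss_total = ((data - data.mean()) ** 2).sum()
    ms_e = (ss_total - ms_r * (n - 1) - ms_c * (k - 1)) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

For identical masks both overlap scores are 1.0, and the ICC approaches 1.0 when a feature agrees closely between manual and automatic delineations.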
Collapse
Affiliation(s)
- Juebin Jin
- Department of Medical Engineering, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Haiyan Zhu
- Department of Gynecology, Shanghai First Maternal and Infant Hospital, Tongji University School of Medicine, Shanghai, China.,Department of Gynecology, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Jindi Zhang
- Department of Gynecology, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Yao Ai
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Ji Zhang
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Yinyan Teng
- Department of Ultrasound Imaging, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| | - Congying Xie
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China.,Department of Radiation and Medical Oncology, Wenzhou Medical University Second Affiliated Hospital, Wenzhou, China
| | - Xiance Jin
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, China
| |
Collapse
|
98
|
Ali Y, Janabi-Sharifi F, Beheshti S. Echocardiographic image segmentation using deep Res-U network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102248] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
99
|
Marosán P, Szalai K, Csabai D, Csány G, Horváth A, Gyöngy M. Automated seeding for ultrasound skin lesion segmentation. ULTRASONICS 2021; 110:106268. [PMID: 33068826 DOI: 10.1016/j.ultras.2020.106268] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Revised: 09/30/2020] [Accepted: 09/30/2020] [Indexed: 06/11/2023]
Abstract
The segmentation of cancer-suspicious skin lesions using ultrasound may help their differential diagnosis and treatment planning. Active contour models (ACM) require an initial seed, which, when chosen manually, may cause variations in segmentation accuracy. Fully automated skin segmentation typically employs layer-by-layer segmentation using a combination of methods; however, such segmentation has not yet been applied to cancerous lesions. In the current work, fully automated segmentation is achieved in two steps: an automated seeding (AS) step using a layer-by-layer method, followed by a growing step using an ACM. The method was tested on images of nevi, melanomas, and basal cell carcinomas from two ultrasound imaging systems (N=60), with all lesions successfully located. For the seeding step, manual seeding (MS) was used as a reference. AS approached the accuracy of MS when the latter used an optimal bounding rectangle based on the ground truth (Sørensen-Dice coefficient (SDC) of 72.3 vs. 74.6, respectively). The effect of varying the manual seed was also investigated; scaling the seed height and width by a factor of 0.7 caused a mean SDC of 54.6. The results show the robustness of automated seeding for skin lesion segmentation.
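The seed-then-grow idea behind this pipeline can be illustrated with a toy NumPy sketch. This is not the authors' method: their pipeline seeds via layer-by-layer segmentation and grows with an active contour model, whereas the sketch below substitutes a darkest-region centroid for the seeding step and plain intensity-based region growing for the ACM step; `auto_seed` and `grow_from_seed` are hypothetical names.

```python
import numpy as np
from collections import deque

def auto_seed(img):
    """Toy automated seeding: centroid of the darkest ~decile of pixels
    (hypoechoic lesions appear dark on ultrasound)."""
    dark = img <= np.quantile(img, 0.1)
    rows, cols = np.nonzero(dark)
    return int(rows.mean()), int(cols.mean())

def grow_from_seed(img, seed, tol=0.15):
    """Toy growing step: starting from the seed pixel, absorb 4-connected
    neighbours whose intensity lies within `tol` of the running region
    mean (a stand-in for the ACM evolution in the real pipeline)."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask
```

The point of the two-step design is that the growing step's accuracy depends strongly on where the seed lands, which is why the paper benchmarks automated seeding against manual seeds of varying size and position.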
Collapse
Affiliation(s)
- Péter Marosán
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083, Budapest, Hungary; Dermus Kft., Kanizsai u. 2-10 C/11, Budapest, Hungary.
| | - Klára Szalai
- Department of Dermatology, Venerology and Dermatooncology, Semmelweis University, Mária u. 41, 1085 Budapest, Hungary.
| | - Domonkos Csabai
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083, Budapest, Hungary; Dermus Kft., Kanizsai u. 2-10 C/11, Budapest, Hungary.
| | - Gergely Csány
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083, Budapest, Hungary; Dermus Kft., Kanizsai u. 2-10 C/11, Budapest, Hungary.
| | - András Horváth
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083, Budapest, Hungary.
| | - Miklós Gyöngy
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, 1083, Budapest, Hungary; Dermus Kft., Kanizsai u. 2-10 C/11, Budapest, Hungary.
| |
Collapse
|
100
|
Yasutomi S, Arakaki T, Matsuoka R, Sakai A, Komatsu R, Shozu K, Dozen A, Machino H, Asada K, Kaneko S, Sekizawa A, Hamamoto R, Komatsu M. Shadow Estimation for Ultrasound Images Using Auto-Encoding Structures and Synthetic Shadows. APPLIED SCIENCES 2021; 11:1127. [DOI: 10.3390/app11031127] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Acoustic shadows are common artifacts in medical ultrasound imaging. The shadows are caused by objects that reflect ultrasound, such as bones, and appear as dark areas in ultrasound images. Detecting such shadows is crucial for assessing image quality and serves as a pre-processing step for further image processing or recognition aimed at computer-aided diagnosis. In this paper, we propose an auto-encoding structure that estimates the shadowed areas and their intensities. The model first splits an input image into an estimated shadow image and an estimated shadow-free image through its encoder and decoder, then combines them to reconstruct the input. By generating plausible synthetic shadows based on relatively coarse domain-specific knowledge of ultrasound images, we can train the model using unlabeled data. If pixel-level labels of the shadows are available, we also utilize them in a semi-supervised fashion. In experiments on ultrasound images for fetal heart diagnosis, our method achieved a Dice score of 0.720 and outperformed conventional image processing methods and a segmentation method based on deep neural networks. The experiments also demonstrate the proposed method's capability to estimate the intensities of shadows and the shadow-free images.
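The unlabeled-data training relies on generating synthetic shadows whose ground truth is known by construction. A much-simplified sketch of such a generator (a smooth multiplicative vertical band, not the paper's domain-knowledge-based shadow shapes; `add_synthetic_shadow` is a hypothetical name) might look like:

```python
import numpy as np

def add_synthetic_shadow(img, rng=None):
    """Darken a random vertical band of `img` multiplicatively with a
    smooth Gaussian attenuation profile. Returns (shadowed, shadow),
    where shadowed = img * (1 - shadow) and shadow holds the per-pixel
    attenuation in [0, 1)."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    center = rng.uniform(0.2, 0.8) * w     # band centre (in columns)
    width = rng.uniform(0.05, 0.2) * w     # band half-width
    strength = rng.uniform(0.5, 0.9)       # peak attenuation
    cols = np.arange(w)
    profile = strength * np.exp(-0.5 * ((cols - center) / width) ** 2)
    shadow = np.tile(profile, (h, 1))      # same attenuation on every row
    return img * (1.0 - shadow), shadow
```

Because shadowed = img * (1 - shadow) by construction, each generated pair hands the auto-encoder an input together with exact shadow and shadow-free targets, without any manual annotation.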
Collapse
Affiliation(s)
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Fujitsu Laboratories Ltd., 4-1-1 Kamikodanaka, Nakahara-Ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588, Japan
| | - Tatsuya Arakaki
- Department of Obstetrics and Gynecology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-Ku, Tokyo 142-8666, Japan
| | - Ryu Matsuoka
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of Obstetrics and Gynecology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-Ku, Tokyo 142-8666, Japan
| | - Akira Sakai
- Artificial Intelligence Laboratory, Fujitsu Laboratories Ltd., 4-1-1 Kamikodanaka, Nakahara-Ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
| | - Reina Komatsu
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of Obstetrics and Gynecology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-Ku, Tokyo 142-8666, Japan
| | - Kanto Shozu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
| | - Ai Dozen
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
| | - Hidenori Machino
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| | - Ken Asada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| | - Syuzo Kaneko
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| | - Akihiko Sekizawa
- Department of Obstetrics and Gynecology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-Ku, Tokyo 142-8666, Japan
| | - Ryuji Hamamoto
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| | - Masaaki Komatsu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| |
Collapse
|