1. Krpic T, Bilodeau M, O'Reilly MA, Masson P, Quaegebeur N. Extended Field of View Imaging Through Correlation With an Experimental Database. IEEE Trans Ultrason Ferroelectr Freq Control 2025; 72:646-655. [PMID: 40131756] [DOI: 10.1109/tuffc.2025.3553784]
Abstract
In this article, a correlation-based (CB) ultrasound imaging technique is implemented to extend the field of view (FOV) in the inspected medium and to enhance image homogeneity. The implementation involves the acquisition, compression, and adaptation of a database of experimental reference signals (CB-Exp), consisting of backpropagated reflections on point-like scatterers at different positions, as an improvement over preceding implementations based on a database of numerical reference signals (CB-Num). Starting from a large database acquired in water and compressing it by 99% into a form applicable to tissue-like media, CB-Exp has been validated in vitro on a CIRS 040GSE phantom. Compared with the synthetic aperture focusing technique (SAFT) and CB-Num, CB-Exp shows reduced sensitivity to the probe's directivity, extending the FOV from 25° with SAFT to 75° with CB-Exp. In vivo testing on a piglet's heart with CB-Exp imaging showed a 3.5-dB contrast improvement on the pericardium wall. Overall benefits of this method include a 0.2 reduction in the standard deviation (std) of the background gCNR and a 10-dB reduction in the std of the point-like target levels, which translates to more homogeneous sensitivity in the axial and lateral directions of the image.
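The gCNR quoted in the abstract above measures how well two regions' intensity statistics separate, independently of dynamic-range stretching. Below is a minimal numpy sketch of the standard definition (gCNR = 1 minus the overlap of the two regions' normalized intensity histograms); the synthetic amplitude distributions are illustrative stand-ins, not data from the paper:

```python
import numpy as np

def gcnr(target, background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap
    of the normalized intensity histograms of the two regions."""
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    ht, _ = np.histogram(target, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(background, bins=bins, range=(lo, hi))
    pt = ht / ht.sum()
    pb = hb / hb.sum()
    return 1.0 - np.minimum(pt, pb).sum()

rng = np.random.default_rng(0)
inside = rng.normal(10.0, 1.0, 10000)   # hypothetical target amplitudes
outside = rng.normal(0.0, 1.0, 10000)   # hypothetical background amplitudes
print(gcnr(inside, outside))  # close to 1.0 for well-separated regions
```

A gCNR near 1 means the two regions are almost perfectly separable; identical distributions give a gCNR of 0, which is why a lower std of the background gCNR across the image indicates more homogeneous contrast.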
2. He Q, Yang Q, Su H, Wang Y. Multi-task learning for segmentation and classification of breast tumors from ultrasound images. Comput Biol Med 2024; 173:108319. [PMID: 38513394] [DOI: 10.1016/j.compbiomed.2024.108319]
Abstract
Segmentation and classification of breast tumors are critical components of breast ultrasound (BUS) computer-aided diagnosis (CAD), which significantly improves the diagnostic accuracy of breast cancer. However, the characteristics of tumor regions in BUS images, such as non-uniform intensity distributions, ambiguous or missing boundaries, and varying tumor shapes and sizes, pose significant challenges to automated segmentation and classification solutions. Many previous studies have proposed multi-task learning methods to jointly tackle tumor segmentation and classification by sharing the features extracted by the encoder. Unfortunately, this often introduces redundant or misleading information, which hinders effective feature exploitation and adversely affects performance. To address this issue, we present ACSNet, a novel multi-task learning network designed to optimize tumor segmentation and classification in BUS images. The segmentation network incorporates a novel gate unit to allow optimal transfer of valuable contextual information from the encoder to the decoder. In addition, we develop the Deformable Spatial Attention Module (DSAModule) to improve segmentation accuracy by overcoming the limitations of conventional convolution in dealing with morphological variations of tumors. In the classification branch, multi-scale feature extraction and channel attention mechanisms are integrated to discriminate between benign and malignant breast tumors. Experiments on two publicly available BUS datasets demonstrate that ACSNet not only outperforms mainstream multi-task learning methods for both breast tumor segmentation and classification tasks, but also achieves state-of-the-art results for BUS tumor segmentation. Code and models are available at https://github.com/qqhe-frank/BUS-segmentation-and-classification.git.
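A shared-encoder multi-task model like the one described is typically trained with a weighted sum of a segmentation loss and a classification loss. The numpy sketch below is illustrative only (ACSNet's actual loss terms and weighting are not specified here); the function names and the weight `alpha` are assumptions:

```python
import numpy as np

def dice_loss(pred, mask, eps=1e-7):
    """Soft Dice loss on probability maps in [0, 1]."""
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + mask.sum() + eps)

def bce_loss(prob, label, eps=1e-7):
    """Binary cross-entropy for the benign/malignant branch."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return -(label * np.log(prob) + (1 - label) * np.log(1 - prob))

def multitask_loss(seg_pred, seg_mask, cls_prob, cls_label, alpha=0.5):
    """Weighted sum of the segmentation and classification terms."""
    return alpha * dice_loss(seg_pred, seg_mask) + (1 - alpha) * bce_loss(cls_prob, cls_label)

# Toy example: perfect segmentation, confident correct classification.
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0
print(multitask_loss(mask, mask, 0.99, 1))  # close to 0
```

In practice the two branches share encoder features, so the gradient of each term flows into the same backbone; the gating described in the abstract decides which of those shared features reach the decoder.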
Affiliation(s)
- Qiqi He
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China; School of Life Science and Technology, Xidian University, Xi'an, China
- Qiuju Yang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Hang Su
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Yixuan Wang
- School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
3. Qi W, Wu HC, Chan SC. MDF-Net: A Multi-Scale Dynamic Fusion Network for Breast Tumor Segmentation of Ultrasound Images. IEEE Trans Image Process 2023; 32:4842-4855. [PMID: 37639409] [DOI: 10.1109/tip.2023.3304518]
Abstract
Breast tumor segmentation of ultrasound images provides valuable information about tumors for early detection and diagnosis. Accurate segmentation is challenging due to low image contrast between areas of interest, speckle noise, and large inter-subject variations in tumor shape and size. This paper proposes a novel Multi-scale Dynamic Fusion Network (MDF-Net) for breast ultrasound tumor segmentation. It employs a two-stage end-to-end architecture with a trunk sub-network for multi-scale feature selection and a structurally optimized refinement sub-network for mitigating impairments such as noise and inter-subject variation via better feature exploration and fusion. The trunk network extends UNet++ with a simplified skip-pathway structure to connect the features between adjacent scales. Moreover, deep supervision at all scales, instead of only at the finest scale as in UNet++, is proposed to extract more discriminative features and mitigate errors from speckle noise via a hybrid loss function. Unlike previous works, the first stage is linked to a loss function of the second stage so that both the preliminary segmentation and the refinement sub-network are refined together during training. The refinement sub-network uses a structurally optimized MDF mechanism to integrate preliminary segmentation information (capturing general tumor shape and size) at coarse scales and to explore inter-subject variation information at finer scales. Experimental results on two public datasets show that the proposed method achieves better Dice and other scores than state-of-the-art methods. Qualitative analysis also indicates that the proposed network is more robust to tumor size and shape, speckle noise, and heavy posterior shadows along tumor boundaries. An optional post-processing step is also proposed to help users mitigate segmentation artifacts. The efficiency of the proposed network is further illustrated on the Electron Microscopy neural structures segmentation dataset, where it outperforms a state-of-the-art algorithm based on UNet-2022 with simpler settings, indicating the advantages of MDF-Net in other challenging image segmentation tasks with small to medium data sizes.
4. Zhu Y, Li C, Hu K, Luo H, Zhou M, Li X, Gao X. A new two-stream network based on feature separation and complementation for ultrasound image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104567]
5. Cui W, Meng D, Lu K, Wu Y, Pan Z, Li X, Sun S. Automatic segmentation of ultrasound images using SegNet and local Nakagami distribution fitting model. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104431]
6. Gao Y, Fu X, Chen Y, Guo C, Wu J. Post-pandemic healthcare for COVID-19 vaccine: Tissue-aware diagnosis of cervical lymphadenopathy via multi-modal ultrasound semantic segmentation. Appl Soft Comput 2023; 133:109947. [PMID: 36570119] [PMCID: PMC9762098] [DOI: 10.1016/j.asoc.2022.109947]
Abstract
With the widespread deployment of COVID-19 vaccines around the world, billions of people have benefited from vaccination and thereby avoided infection. However, a large number of clinical cases have revealed diverse side effects of COVID-19 vaccines, among which cervical lymphadenopathy is one of the most frequent local reactions. Rapid detection of cervical lymph nodes (LNs) is therefore essential for vaccine recipients' healthcare and for avoiding misdiagnosis in the post-pandemic era. This paper focuses on a novel deep learning-based framework for the rapid diagnosis of cervical lymphadenopathy in COVID-19 vaccine recipients. Existing deep learning-based computer-aided diagnosis (CAD) methods for cervical LN enlargement mostly depend on single-modality images, e.g., grayscale ultrasound (US), color Doppler US, or CT, and fail to effectively integrate information from multi-source medical images. Meanwhile, both the tissue objects surrounding the cervical LNs and the different regions inside them may carry valuable diagnostic knowledge that remains to be mined. This paper proposes a Tissue-Aware Cervical Lymph Node Diagnosis method (TACLND) via multi-modal ultrasound semantic segmentation. The method effectively integrates grayscale and color Doppler US images and realizes pixel-level localization of different tissue objects, i.e., lymph, muscle, and blood vessels. With inter-tissue and intra-tissue attention mechanisms applied, the method enhances the implicit tissue-level diagnostic knowledge in both the spatial and channel dimensions and diagnoses cervical LNs as normal, benign, or malignant. Extensive experiments conducted on a collected cervical LN US dataset demonstrate the effectiveness of the method for both tissue detection and cervical lymphadenopathy diagnosis. The proposed framework can therefore support efficient diagnosis of vaccine recipients' cervical LNs and assist doctors in discriminating between COVID-related reactive lymphadenopathy and metastatic lymphadenopathy.
Affiliation(s)
- Yue Gao
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Xiangling Fu
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Yuepeng Chen
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Chenyi Guo
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Ji Wu
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
7. Chen G, Dai Y, Zhang J. C-Net: Cascaded convolutional neural network with global guidance and refinement residuals for breast ultrasound images segmentation. Comput Methods Programs Biomed 2022; 225:107086. [PMID: 36044802] [DOI: 10.1016/j.cmpb.2022.107086]
Abstract
BACKGROUND AND OBJECTIVE: Breast lesion segmentation is an important step in computer-aided diagnosis systems. However, speckle noise, heterogeneous structure, and similar intensity distributions pose challenges for breast lesion segmentation.
METHODS: This paper presents a novel cascaded convolutional neural network integrating U-Net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet) for lesion segmentation in breast ultrasound images. Specifically, U-Net first generates a set of saliency maps containing low-level and high-level image structures. The bidirectional attention guidance network then captures the context between global (low-level) and local (high-level) features in the saliency maps; introducing the global feature map reduces the interference of surrounding tissue on the lesion regions. Furthermore, a refinement residual network based on the core architecture of U-Net learns the difference between the rough saliency feature maps and the ground-truth masks. Learning these residuals helps obtain a more complete lesion mask.
RESULTS: To evaluate segmentation performance, the network was compared with several state-of-the-art segmentation methods on the public breast ultrasound dataset (BUSIS) using six commonly used evaluation metrics. The method achieves the highest scores on all six metrics, and p-values indicate significant differences between the method and the comparative methods.
CONCLUSIONS: Experimental results show that the method achieves the most competitive segmentation results. Applied additionally to renal ultrasound image segmentation, the method shows good adaptability and robustness for ultrasound image segmentation in general.
Affiliation(s)
- Gongping Chen
- College of Artificial Intelligence, Nankai University, Tianjin, China.
- Yu Dai
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Jianxun Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
8. BUSIS: A Benchmark for Breast Ultrasound Image Segmentation. Healthcare (Basel) 2022; 10:729. [PMID: 35455906] [PMCID: PMC9025635] [DOI: 10.3390/healthcare10040729]
Abstract
Breast ultrasound (BUS) image segmentation is challenging and critical for BUS computer-aided diagnosis (CAD) systems. Many BUS segmentation approaches have been studied over the last two decades, but the performance of most approaches has been assessed using relatively small private datasets with different quantitative metrics, which results in discrepancies in performance comparison. There is therefore a pressing need for a benchmark that compares existing methods objectively on a public dataset, determines the performance of the best breast tumor segmentation algorithms available today, and investigates which segmentation strategies are valuable in clinical practice and theoretical study. In this work, a benchmark for B-mode breast ultrasound image segmentation is presented. In the benchmark, (1) 562 breast ultrasound images were collected, and standardized procedures were proposed to obtain accurate annotations from four radiologists; (2) the performance of 16 state-of-the-art segmentation methods was extensively compared, demonstrating that most deep learning-based approaches achieved high Dice similarity coefficient values (DSC ≥ 0.90) and outperformed conventional approaches; (3) a losses-based approach was proposed to evaluate the sensitivity of semi-automatic segmentation to user interactions; and (4) successful segmentation strategies and possible future improvements were discussed in detail.
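The DSC threshold quoted above (DSC ≥ 0.90) is the standard overlap metric for binary masks. A minimal numpy implementation for context, with a made-up 10×10 example:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

gt = np.zeros((10, 10), dtype=int)
gt[2:6, 2:6] = 1        # 16-pixel "tumor"
pred = np.zeros((10, 10), dtype=int)
pred[3:7, 3:7] = 1      # prediction shifted by one pixel
print(dice(pred, gt))   # 2*9 / (16+16) = 0.5625
```

DSC is twice the intersection over the sum of the two mask areas, so it ranges from 0 (no overlap) to 1 (identical masks); a one-pixel shift of a small mask already costs a lot, which is why DSC ≥ 0.90 is a strong result on small lesions.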
9. Rahali R, Dridi N, Ben Salem Y, Descombes X, Debreuve E, De Graeve F, Dahman H. Biological image segmentation using Region-Scalable Fitting Energy with B-spline level set implementation and Watershed. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.02.005]
10. Ning Z, Zhong S, Feng Q, Chen W, Zhang Y. SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image. IEEE Trans Med Imaging 2022; 41:476-490. [PMID: 34582349] [DOI: 10.1109/tmi.2021.3116087]
Abstract
Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissue (i.e., background) and lesion regions (i.e., foreground) make lesion segmentation challenging. Although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations to assist foreground segmentation. Other characteristics of BUS images, namely 1) low-contrast appearance and blurry boundaries and 2) significant shape and position variation of lesions, also increase the difficulty of accurate lesion segmentation. This paper presents a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. SMU-Net is composed of a main network, an additional middle stream, and an auxiliary network. Specifically, saliency maps incorporating both low-level and high-level image structures are first generated for the foreground and background. These saliency maps then guide the main network and auxiliary network in learning foreground-salient and background-salient representations, respectively. Furthermore, the additional middle stream consists of background-assisted fusion, shape-aware, edge-aware, and position-aware units; it receives coarse-to-fine representations from the main and auxiliary networks, efficiently fuses the foreground-salient and background-salient features, and enhances the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale than several state-of-the-art deep learning approaches for breast lesion segmentation in ultrasound images.
11. Synthetic OCT data in challenging conditions: three-dimensional OCT and presence of abnormalities. Med Biol Eng Comput 2021; 60:189-203. [PMID: 34792759] [PMCID: PMC8724113] [DOI: 10.1007/s11517-021-02469-w]
Abstract
Retinal optical coherence tomography (OCT) plays an important role in ophthalmology, and automatic analysis of OCT is of real importance: image denoising facilitates better diagnosis, and image segmentation and classification are critical in treatment evaluation. Synthetic OCT was recently considered to provide a benchmark for quantitative comparison of automatic algorithms and to be used in the training stage of novel deep learning solutions. Due to the complicated data structure of retinal OCT, only a limited number of delineated OCT datasets are available in the presence of abnormalities; furthermore, the intrinsic three-dimensional (3D) structure of OCT is ignored in many public 2D datasets. A new synthesis method is proposed, applicable to 3D data and feasible in the presence of abnormalities such as diabetic macular edema (DME). In this method, a limited number of OCT volumes is used during the training step, and the Active Shape Model is used to produce synthetic OCTs along with delineations of retinal boundaries and locations of abnormalities. Statistical comparison of thickness maps showed that the synthetic dataset can serve as a statistically acceptable representative of the original dataset (p > 0.05). Visual inspection of the synthesized vessels was also promising. Regarding the texture features of the synthesized datasets, Q-Q plots were used, and even in cases where the points digressed slightly from the straight line, the Kolmogorov-Smirnov test did not reject the null hypothesis, indicating the same distribution of texture features in the real and synthetic data. The proposed algorithm provides a unique benchmark for comparing OCT enhancement methods and a tailored augmentation method for overcoming the limited number of OCTs in deep learning algorithms.
12. Iqbal A, Sharif M. MDA-Net: Multiscale dual attention-based network for breast lesion segmentation using ultrasound images. J King Saud Univ Comput Inf Sci 2021. [DOI: 10.1016/j.jksuci.2021.10.002]
13. An FP, Liu JE, Wang JR. Medical image segmentation algorithm based on positive scaling invariant-self encoding CCA. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102395]
14. Chel H, Bora PK, Ramchiary KK. A fast technique for hyper-echoic region separation from brain ultrasound images using patch based thresholding and cubic B-spline based contour smoothing. Ultrasonics 2021; 111:106304. [PMID: 33360770] [DOI: 10.1016/j.ultras.2020.106304]
Abstract
Ultrasound image-guided brain surgery (UGBS) requires an automatic and fast image segmentation method. Level-set and active-contour based algorithms have been found useful for obtaining topology-independent boundaries between different image regions, but slow convergence limits their use in online US image segmentation, and their performance deteriorates on US images because of intensity inhomogeneity. This paper proposes an effective region-driven method for segmenting hyper-echoic (HE) regions while suppressing the hypo-echoic and anechoic regions in brain US images. An automatic threshold estimation scheme is developed with a modified Niblack's approach. The separation of the hyper-echoic and non-hyper-echoic (NHE) regions is performed by successively applying patch-based intensity thresholding and boundary smoothing. First, a patch-based segmentation roughly separates the two regions; the patch-based approach reduces the effect of intensity heterogeneity within an HE region. An iterative boundary-correction step with decreasing patch size further improves the regional topology and refines the boundary regions. To avoid slope and curvature discontinuities and to obtain distinct boundaries between HE and NHE regions, a cubic B-spline model of curve smoothing is applied. The proposed method is 50-100 times faster than other level-set based image segmentation algorithms. The segmentation performance and convergence speed of the proposed method were compared with four competing level-set based algorithms; the computational results show that the proposed approach outperforms the other level-set based techniques both subjectively and objectively.
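The threshold scheme above builds on Niblack's method, where each pixel's threshold is the local mean plus k times the local standard deviation over a window. Below is a sketch of the classical (unmodified) Niblack rule computed with integral images for speed; the window size and k are illustrative defaults, not the paper's modified parameters:

```python
import numpy as np

def niblack_threshold(img, w=15, k=-0.2):
    """Classical Niblack local threshold: T = local mean + k * local std,
    over a w x w window, computed via summed-area tables."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode='reflect')
    # Integral images of intensities and squared intensities.
    s1 = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s2 = np.cumsum(np.cumsum(p * p, axis=0), axis=1)
    s1 = np.pad(s1, ((1, 0), (1, 0)))
    s2 = np.pad(s2, ((1, 0), (1, 0)))
    H, W = img.shape
    n = w * w
    def box(s):
        # Window sum for every output pixel from the zero-padded integral image.
        return s[w:w + H, w:w + W] - s[:H, w:w + W] - s[w:w + H, :W] + s[:H, :W]
    mean = box(s1) / n
    var = np.maximum(box(s2) / n - mean ** 2, 0.0)
    return mean + k * np.sqrt(var)

rng = np.random.default_rng(1)
img = rng.normal(50, 5, (64, 64))
img[20:40, 20:40] += 60               # bright (hyper-echoic-like) patch
binary = img > niblack_threshold(img)  # local binarization
```

Because the threshold adapts to each window's mean and spread, it tolerates the slow intensity inhomogeneity that defeats a single global threshold; the abstract's iterative shrinking of the patch size plays a similar role at region boundaries.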
Affiliation(s)
- Haradhan Chel
- Department of Electronics and Communication, Central Institute of Technology Kokrajhar, Assam 783370, India; City Clinic and Research Centre, Kokrajhar, Assam, India.
- P K Bora
- Department of EEE, Indian Institute of Technology Guwahati, Assam, India
- K K Ramchiary
- City Clinic and Research Centre, Kokrajhar, Assam, India
15. Xue C, Zhu L, Fu H, Hu X, Li X, Zhang H, Heng PA. Global guidance network for breast lesion segmentation in ultrasound images. Med Image Anal 2021; 70:101989. [PMID: 33640719] [DOI: 10.1016/j.media.2021.101989]
Abstract
Automatic breast lesion segmentation in ultrasound helps diagnose breast cancer, one of the most dreadful diseases affecting women globally. Segmenting breast lesions accurately from ultrasound images is challenging due to inherent speckle artifacts, blurry lesion boundaries, and inhomogeneous intensity distributions inside the lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN focus on local regions and have limited capability to capture long-range dependencies in the input ultrasound image, degrading breast lesion segmentation accuracy. This paper develops a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules to boost breast ultrasound lesion segmentation. The GGB uses the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies in both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation result. Experimental results on a public dataset and a collected dataset show that the network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Applied to ultrasound prostate segmentation, the method also identifies prostate regions better than state-of-the-art networks.
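The core of any non-local operation like the GGB described above is attention over all spatial positions, so that every pixel can aggregate evidence from the whole image rather than a local receptive field. A bare-bones numpy sketch of that idea (dot-product similarity, softmax weighting, residual add), illustrating the mechanism rather than the paper's exact GGB:

```python
import numpy as np

def nonlocal_block(feat):
    """Minimal non-local (self-attention) operation on a C x H x W feature
    map: every position aggregates features from all positions, weighted by
    pairwise similarity -- the long-range dependencies that plain
    convolutions miss."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                  # C x N
    sim = x.T @ x                               # N x N pairwise similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over positions
    out = x @ attn.T                            # C x N attention-weighted sum
    return feat + out.reshape(C, H, W)          # residual connection

rng = np.random.default_rng(2)
f = rng.normal(size=(4, 8, 8))
g = nonlocal_block(f)
assert g.shape == f.shape
```

Real implementations project the features into separate query/key/value embeddings before the similarity step; the sketch keeps a single embedding to stay short.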
Affiliation(s)
- Cheng Xue
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Hong Kong, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
- Xiaowei Hu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Hai Zhang
- Shenzhen People's Hospital, The Second Clinical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Guangdong Province, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong; Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
16. Ahmed S, Kamal U, Hasan MK. DSWE-Net: A deep learning approach for shear wave elastography and lesion segmentation using single push acoustic radiation force. Ultrasonics 2021; 110:106283. [PMID: 33166787] [DOI: 10.1016/j.ultras.2020.106283]
Abstract
Ultrasound-based non-invasive elasticity imaging modalities have received significant consideration for tissue characterization over the last few years. Though substantial advances have been made, conventional Shear Wave Elastography (SWE) methods still suffer from poor image quality in regions far from the push location, particularly those that rely on a single focused ultrasound push beam to generate shear waves. This study proposes DSWE-Net, a novel deep learning-based approach that constructs Young's modulus maps from ultrasonically tracked tissue velocity data resulting from a single acoustic radiation force (ARF) push. The proposed network employs a 3D convolutional encoder, followed by a recurrent block consisting of several Convolutional Long Short-Term Memory (ConvLSTM) layers, to extract high-level spatio-temporal features from different time frames of the input velocity data. Finally, a pair of coupled 2D convolutional decoder blocks reconstructs the modulus image and additionally performs inclusion segmentation by generating a binary mask. A multi-task learning loss function is also proposed for end-to-end training of the network with 1260 data samples obtained from a simulation environment, including both bi-level and multi-level phantom structures. The performance of the proposed network is evaluated on 140 synthetic test samples, and the results are compared both qualitatively and quantitatively with the current state-of-the-art method, Local Phase Velocity Based Imaging (LPVI). With an average SSIM of 0.90, RMSE of 0.10, and PSNR of 20.69 dB, DSWE-Net performs much better on the imaging task than LPVI. The method also achieves an average IoU score of 0.81 on the segmentation task, making it suitable for localizing inclusions as well. In this initial study, the method also gains an overall improvement of 0.09 in SSIM, 4.81 dB in PSNR, 2.02 dB in CNR, and 0.09 in RMSE over LPVI on a completely unseen set of CIRS tissue-mimicking phantom data, demonstrating better generalization capability and potential for use in real-world clinical practice.
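The RMSE and PSNR figures quoted above are directly related: PSNR is just the peak-to-RMSE ratio on a log scale. A small numpy helper showing that relationship; the modulus-map values are made up for illustration:

```python
import numpy as np

def rmse(ref, est):
    """Root-mean-square error between reference and estimate."""
    return np.sqrt(np.mean((ref - est) ** 2))

def psnr(ref, est, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the reference max."""
    peak = ref.max() if peak is None else peak
    return 20.0 * np.log10(peak / rmse(ref, est))

rng = np.random.default_rng(3)
ref = rng.uniform(0, 1, (32, 32))            # stand-in Young's modulus map
est = ref + rng.normal(0, 0.01, ref.shape)   # reconstruction with small error
print(psnr(ref, est))
```

Halving the RMSE at a fixed peak adds about 6 dB of PSNR, which is why the 4.81-dB PSNR gain and the 0.09 RMSE reduction reported above tell a consistent story.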
Affiliation(s)
- Shahed Ahmed
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Uday Kamal
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
- Md Kamrul Hasan
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
17. Vakanski A, Xian M, Freer PE. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med Biol 2020; 46:2819-2833. [PMID: 32709519] [PMCID: PMC7483681] [DOI: 10.1016/j.ultrasmedbio.2020.06.015]
Abstract
Incorporating human domain knowledge into breast tumor diagnosis is challenging because shape, boundary, curvature, intensity, and other common medical priors vary significantly across patients and cannot be employed directly. This work proposes a new approach to integrating visual saliency into a deep learning model for breast tumor segmentation in ultrasound images. Visual saliency refers to image maps containing regions that are more likely to attract radiologists' visual attention. The proposed approach introduces attention blocks into a U-Net architecture and learns feature representations that prioritize spatial regions with high saliency levels. The validation results indicate increased accuracy for tumor segmentation relative to models without salient attention layers. The approach achieved a Dice similarity coefficient (DSC) of 90.5% on a data set of 510 images. The salient attention model has the potential to enhance accuracy and robustness in processing medical images of other organs by providing a means to incorporate task-specific knowledge into deep learning architectures.
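Stripped of the learned convolutions, the salient-attention idea reduces to gating a feature map elementwise with a sigmoid of the saliency map, so high-saliency regions pass through with larger weight. A toy sketch under that simplification (pure Python, not the authors' implementation):

```python
import math

def salient_attention(features, saliency):
    """Rescale a 2-D feature map by a sigmoid gate of a saliency map,
    so regions radiologists tend to attend to dominate the representation."""
    gate = [[1.0 / (1.0 + math.exp(-s)) for s in row] for row in saliency]
    return [[f * g for f, g in zip(frow, grow)]
            for frow, grow in zip(features, gate)]
```

A zero-saliency pixel is attenuated by a factor of 0.5, while strongly salient pixels pass through nearly unchanged.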
Affiliation(s)
- Aleksandar Vakanski
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho, USA
- Phoebe E Freer
- University of Utah School of Medicine, Salt Lake City, Utah, USA
18
Wang X, Zhai Y, Liu X, Zhu W, Gao J. Level-Set Method for Image Analysis of Schlemm's Canal and Trabecular Meshwork. Transl Vis Sci Technol 2020; 9:7. [PMID: 32953247 PMCID: PMC7476667 DOI: 10.1167/tvst.9.10.7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Accepted: 07/19/2020] [Indexed: 12/17/2022] Open
Abstract
Purpose To evaluate different segmentation methods for analyzing Schlemm's canal (SC) and the trabecular meshwork (TM) in ultrasound biomicroscopy (UBM) images. Methods Twenty-six healthy volunteers were recruited. The intraocular pressure (IOP) was measured while study subjects blew a trumpet. Images were obtained at different IOPs by 50-MHz UBM. ImageJ software and three segmentation methods (K-means, fuzzy C-means, and level set) were applied to segment the UBM images. The quantitative analysis of the TM-SC region was based on the segmentation results. The relative error and the interclass correlation coefficient (ICC) were used to quantify the accuracy and the repeatability of measurements. Pearson correlation analysis was conducted to evaluate the associations between the IOP and the TM and SC geometric measurements. Results A total of 104 UBM images were obtained. Among them, 84 were sufficiently clear to be segmented. The level-set method results had a higher similarity to the ImageJ results than those of the other two methods. The ICC values of the level-set method were 0.97, 0.95, 0.90, and 0.57 for the SC area, SC perimeter, SC length, and TM width, respectively. The corresponding Pearson correlation coefficients between the IOP and the SC area, SC perimeter, SC length, and TM width were -0.91, -0.72, -0.66, and -0.61 (P < 0.0001). Conclusions The level-set method showed better accuracy than the other two methods. Compared with manual methods, it can achieve similar precision, better repeatability, and greater efficiency. Therefore, the level-set method can be used for reliable UBM image segmentation. Translational Relevance The level-set method can be used to analyze the TM and SC regions in UBM images semiautomatically.
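The Pearson coefficients reported above follow the usual sample formula r = cov(x, y) / (σx·σy); a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A perfectly inverse relationship, like the strong negative IOP-to-SC-area association above, yields r close to -1.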
Affiliation(s)
- Xin Wang
- Department of Ophthalmology, Liaocheng People's Hospital, Cheeloo College of Medicine, Shandong University, Liaocheng, Shandong, China; Department of Ophthalmology, Liaocheng People's Hospital, Liaocheng, Shandong, China
- Yuxi Zhai
- Department of Ophthalmology, Liaocheng People's Hospital, Liaocheng, Shandong, China
- Xueyan Liu
- Department of Mathematics, Liaocheng University, Liaocheng, Shandong, China
- Wei Zhu
- Department of Pharmacology, Qingdao University School of Pharmacy, Qingdao, Shandong, China; Qingdao Haier Biotech Co. Ltd, Qingdao, Shandong, China
- Jianlu Gao
- Department of Ophthalmology, Liaocheng People's Hospital, Cheeloo College of Medicine, Shandong University, Liaocheng, Shandong, China; Department of Ophthalmology, Liaocheng People's Hospital, Liaocheng, Shandong, China
19
Lei B, Huang S, Li H, Li R, Bian C, Chou YH, Qin J, Zhou P, Gong X, Cheng JZ. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med Image Anal 2020; 64:101753. [DOI: 10.1016/j.media.2020.101753] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 05/27/2020] [Accepted: 06/06/2020] [Indexed: 11/25/2022]
20
Segmentation of breast ultrasound image with semantic classification of superpixels. Med Image Anal 2020; 61:101657. [PMID: 32032899 DOI: 10.1016/j.media.2020.101657] [Citation(s) in RCA: 83] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Revised: 01/18/2020] [Accepted: 01/22/2020] [Indexed: 11/22/2022]
Abstract
Breast cancer poses a serious threat to women. Ultrasound imaging has been applied extensively in the diagnosis of breast cancer. Due to poor image quality, segmentation of breast ultrasound (BUS) images remains a very challenging task, yet it is a crucial step for further analysis. In this paper, we propose a novel method to segment breast tumors via semantic classification and patch merging. The proposed method first selects two diagonal points to crop a region of interest (ROI) on the original image. Then, histogram equalization, a bilateral filter, and a pyramid mean-shift filter are adopted to enhance the image. The cropped image is divided into many superpixels using simple linear iterative clustering (SLIC). Features are then extracted from the superpixels and a bag-of-words model is created. An initial classification is obtained by a back-propagation neural network (BPNN). To refine the preliminary result, k-nearest neighbor (KNN) reclassification is applied to achieve the final result. To verify the proposed method, we collected a BUS dataset containing 320 cases. The segmentation results of our method were compared with the corresponding results obtained by five existing approaches. The experimental results show that our method achieved competitive results compared to conventional methods in terms of TP and FP, and produced good approximations to the hand-labelled tumor contours with comprehensive consideration of all metrics (F1-score = 89.87% ± 4.05%, average radial error = 9.95% ± 4.42%).
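The KNN reclassification step can be sketched as a majority vote over each superpixel's nearest neighbours in feature space; the one-dimensional feature vectors below are made-up stand-ins for the paper's bag-of-words features:

```python
def knn_relabel(features, labels, k=3):
    """Reassign each sample the majority label of its k nearest
    neighbours (squared Euclidean distance), refining an initial labelling."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    refined = []
    for i, f in enumerate(features):
        nearest = sorted((j for j in range(len(features)) if j != i),
                         key=lambda j: dist2(f, features[j]))[:k]
        votes = [labels[j] for j in nearest]
        refined.append(max(set(votes), key=votes.count))
    return refined
```

With two well-separated clusters, a single mislabelled superpixel is voted back to its cluster's label.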
21
Na J, Park S, Bak JH, Kim M, Lee D, Yoo Y, Kim I, Park J, Lee U, Lee JM. Bayesian Inference of Aqueous Mineral Carbonation Kinetics for Carbon Capture and Utilization. Ind Eng Chem Res 2019. [DOI: 10.1021/acs.iecr.9b01062] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Jonggeol Na
- Clean Energy Research Center, Korea Institute of Science and Technology (KIST), 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Seongeon Park
- School of Chemical and Biological Engineering, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul 08826, Republic of Korea
- Ji Hyun Bak
- School of Computational Sciences, Korea Institute for Advanced Study (KIAS), 85 Hoegi-ro, Dongdaemun-gu, Seoul 02455, Republic of Korea
- Minjun Kim
- School of Chemical and Biological Engineering, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul 08826, Republic of Korea
- Dongwoo Lee
- School of Chemical and Biological Engineering, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul 08826, Republic of Korea
- Yunsung Yoo
- Department of Chemical and Biomolecular Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Injun Kim
- Department of Chemical and Biomolecular Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Jinwon Park
- Department of Chemical and Biomolecular Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Ung Lee
- Clean Energy Research Center, Korea Institute of Science and Technology (KIST), 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Jong Min Lee
- School of Chemical and Biological Engineering, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul 08826, Republic of Korea
22
Lei B, Huang S, Li R, Bian C, Li H, Chou YH, Cheng JZ. Segmentation of breast anatomy for automated whole breast ultrasound images with boundary regularized convolutional encoder–decoder network. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.09.043] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
23
Hu Y, Guo Y, Wang Y, Yu J, Li J, Zhou S, Chang C. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med Phys 2018; 46:215-228. [PMID: 30374980 DOI: 10.1002/mp.13268] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Revised: 09/30/2018] [Accepted: 10/16/2018] [Indexed: 01/19/2023] Open
Abstract
PURPOSE Due to the low contrast, blurry boundaries, and abundant shadows in breast ultrasound (BUS) images, automatic tumor segmentation remains a challenging task. Deep learning provides a solution to this problem, since it can effectively extract representative features of lesions and background in BUS images. METHODS A novel automatic tumor segmentation method is proposed by combining a dilated fully convolutional network (DFCN) with a phase-based active contour (PBAC) model. The DFCN is an improved fully convolutional neural network with dilated convolution in the deeper layers, fewer parameters, and batch normalization; its large receptive field can separate tumors from background. Because the predictions made by the DFCN are relatively rough due to blurry boundaries and variations in tumor size, the PBAC model, which adds both region-based and phase-based energy functions, is applied to further improve the segmentation results. The DFCN model is trained and tested on dataset 1, which contains 570 BUS images from 89 patients. On dataset 2, a 10-fold support vector machine (SVM) classifier is employed to verify the diagnostic ability using 460 features extracted from the segmentation results of the proposed method. RESULTS The present method was compared with three state-of-the-art networks: FCN-8s, U-net, and the dilated residual network (DRN). Experimental results from 170 BUS images show that the proposed method achieved a Dice similarity coefficient of 88.97 ± 10.01%, a Hausdorff distance (HD) of 35.54 ± 29.70 pixels, and a mean absolute deviation (MAD) of 7.67 ± 6.67 pixels, the best segmentation performance among the compared methods. On dataset 2, the area under the curve (AUC) of the 10-fold SVM classifier was 0.795, similar to classification using the manual segmentation results. CONCLUSIONS The proposed automatic method may be sufficiently accurate, robust, and efficient for medical ultrasound applications.
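The large receptive field that dilation buys the DFCN follows from a simple recurrence: with stride 1, each convolution adds (kernel_size - 1) * dilation to the receptive field. A sketch of that bookkeeping (the layer configuration below is illustrative, not the paper's):

```python
def receptive_field(layers):
    """Receptive field (in pixels) of a stack of stride-1 convolutions,
    given as (kernel_size, dilation) pairs."""
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf
```

Four 3x3 layers with dilations 1, 1, 2, 4 already see a 17-pixel-wide context, versus 9 pixels for the same stack without dilation.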
Affiliation(s)
- Yuzhou Hu
- Department of Electronic Engineering, Fudan University, Shanghai, 200433, China
- Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, 200433, China
- Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, 200433, China
- Jinhua Yu
- Department of Electronic Engineering, Fudan University, Shanghai, 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, 200433, China
- Jiawei Li
- Department of Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Shichong Zhou
- Department of Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Cai Chang
- Department of Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
24
Li Y, Ho CP, Toulemonde M, Chahal N, Senior R, Tang MX. Fully Automatic Myocardial Segmentation of Contrast Echocardiography Sequence Using Random Forests Guided by Shape Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1081-1091. [PMID: 28961106 DOI: 10.1109/tmi.2017.2747081] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Myocardial contrast echocardiography (MCE) is an imaging technique that assesses left ventricle function and myocardial perfusion for the detection of coronary artery disease. Automatic MCE perfusion quantification is challenging and requires accurate segmentation of the myocardium from noisy and time-varying images. Random forests (RF) have been successfully applied to many medical image segmentation tasks. However, the pixel-wise RF classifier ignores contextual relationships between the label outputs of individual pixels. An RF that relies only on local appearance features is also susceptible to data with large intensity variations. In this paper, we demonstrate how to overcome these limitations of the classic RF by presenting a fully automatic pipeline for myocardial segmentation in full-cycle 2-D MCE data. Specifically, a statistical shape model is used to provide shape prior information that guides the RF segmentation in two ways. First, a novel shape model (SM) feature is incorporated into the RF framework to generate a more accurate RF probability map. Second, the shape model is fitted to the RF probability map to refine and constrain the final segmentation to plausible myocardial shapes. We further improve performance by introducing a bounding-box detection algorithm as a preprocessing step in the segmentation pipeline. Our approach on 2-D images is further extended to 2-D+t sequences, which ensures temporal consistency in the final sequence segmentations. When evaluated on clinical MCE data sets, the proposed method achieves notable improvement in segmentation accuracy and outperforms other state-of-the-art methods, including the classic RF and its variants, the active shape model, and image registration.
25
Dormer JD, Guo R, Shen M, Jiang R, Wagner MB, Fei B. Ultrasound Segmentation of Rat Hearts Using Convolution Neural Networks. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 10580:105801A. [PMID: 30197465 PMCID: PMC6126353 DOI: 10.1117/12.2293558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Ultrasound is widely used for diagnosing cardiovascular diseases. However, estimates such as left ventricle volume currently require manual segmentation, which can be time consuming. In addition, cardiac ultrasound is often complicated by imaging artifacts such as shadowing and mirror images, which confound simple intensity-based automated segmentation methods. In this work, we use convolutional neural networks (CNNs) to segment ultrasound images of rat hearts embedded in agar phantoms into four classes: background, myocardium, left ventricle cavity, and right ventricle cavity. We also explore how the inclusion of a single diseased heart changes the results in a small dataset. We found an average overall segmentation accuracy of 70.0% ± 7.3% when combining the healthy and diseased data, compared to 72.4% ± 6.6% for the healthy hearts alone. This work suggests that including diseased hearts alongside healthy hearts in the training data could improve segmentation results, while testing a diseased heart with a model trained only on healthy hearts can produce accurate segmentation for some classes but not others. More data are needed to improve the accuracy of the CNN-based segmentation.
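The overall accuracy quoted above is simply the fraction of pixels whose predicted class (of the four) matches the ground truth; a minimal sketch over flattened label maps:

```python
def overall_accuracy(pred, truth):
    """Fraction of positions where predicted and true class labels agree."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)
```

Per-class metrics such as Dice would expose the per-class disparities the abstract mentions, which a single pooled accuracy hides.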
Affiliation(s)
- James D. Dormer
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Ming Shen
- Department of Pediatrics, Emory University, Atlanta, GA
- Rong Jiang
- Department of Pediatrics, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
26
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890 DOI: 10.1016/j.compbiomed.2017.11.018] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 11/30/2017] [Accepted: 11/30/2017] [Indexed: 12/14/2022]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then insight into the localization and segmentation of tissues is provided, both for the case in which the organ/tissue localization provides the final segmentation and for the case in which a two-step segmentation process is needed, because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the use of higher-frequency probes when possible, the focus on fully automatic algorithms, and the increase in available data, are discussed.
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
27
Faisal A, Ng SC, Goh SL, Lai KW. Knee cartilage segmentation and thickness computation from ultrasound images. Med Biol Eng Comput 2017; 56:657-669. [PMID: 28849317 DOI: 10.1007/s11517-017-1710-2] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2017] [Accepted: 08/09/2017] [Indexed: 11/27/2022]
Abstract
Quantitative thickness computation of knee cartilage in ultrasound images requires segmentation of a monotonous hypoechoic band between the soft tissue-cartilage interface and the cartilage-bone interface. Speckle noise and the intensity bias captured in ultrasound images often complicate the segmentation task. This paper presents knee cartilage segmentation using a locally statistical level set method (LSLSM) and thickness computation using normal distance. A comparison of several level set methods applied to segmenting the knee cartilage shows that LSLSM yields a more satisfactory result. When LSLSM was applied to 80 datasets, the qualitative segmentation assessment indicated substantial agreement, with a Cohen's κ coefficient of 0.73. The quantitative validation metrics of Dice similarity coefficient and Hausdorff distance had average values of 0.91 ± 0.01 and 6.21 ± 0.59 pixels, respectively. These segmentation results make it possible to compute the true thickness between the two cartilage interfaces from the segmented images. The measured cartilage thickness ranged from 1.35 to 2.42 mm with an average value of 1.97 ± 0.11 mm, reflecting the robustness of the segmentation algorithm to varying cartilage thicknesses. These results indicate a potential application of the methods described to the assessment of cartilage degeneration, where changes in cartilage thickness can be quantified over time by comparing the true thickness at certain time intervals.
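The thickness measurement reduces to averaging, for each point on one segmented interface, its distance to the other interface. The sketch below uses point-to-nearest-point distance as a stand-in for the paper's normal distance, and the pixel spacing is an assumed parameter:

```python
import math

def mean_thickness(upper, lower, pixel_mm=0.05):
    """Mean separation between two interfaces given as lists of (x, y)
    pixel coordinates, converted to millimetres."""
    def closest(p, curve):
        return min(math.dist(p, q) for q in curve)
    dists = [closest(p, lower) for p in upper]
    return pixel_mm * sum(dists) / len(dists)
```

True normal distance would measure along the local surface normal of the upper interface, which matters when the interfaces are strongly curved.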
Affiliation(s)
- Amir Faisal
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Siew-Cheok Ng
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Siew-Li Goh
- Faculty of Medicine, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603, Kuala Lumpur, Malaysia
28
Abstract
In this paper, region-difference filters for the segmentation of liver ultrasound (US) images are proposed. Region-difference filters evaluate the maximum difference between the averages of two regions of the window around the center pixel. Applying the filters to the whole image gives the region-difference image, which is then converted into a binary image and morphologically operated on to segment the desired lesion from the ultrasound image. The proposed method is compared with the maximum a posteriori-Markov random field (MAP-MRF), Chan-Vese active contour method (CV-ACM), and active contour region-scalable fitting energy (RSFE) methods. MATLAB code available online for the RSFE method was used for comparison, whereas the MAP-MRF and CV-ACM methods were coded in MATLAB by the authors. Since no comparison of the three methods' performance was available on a common database, the performance of the three methods and the proposed method was compared on liver US images obtained from PGIMER, Chandigarh, India, and from an online resource. A radiologist blindly analyzed the segmentation results of the four methods on 56 images and selected the segmentation result obtained by the proposed method as the best for 46 test US images. For the remaining 10 US images, the proposed method performed very close to the other three segmentation methods. The proposed segmentation method obtained an overall accuracy of 99.32%, compared to overall accuracies of 85.9, 98.71, and 68.21% obtained by the MAP-MRF, CV-ACM, and RSFE methods, respectively. The computational time taken by the proposed method is 5.05 s, compared to 26.44, 24.82, and 28.36 s for the MAP-MRF, CV-ACM, and RSFE methods, respectively.
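A region-difference filter of the kind described can be sketched as follows: around each interior pixel, compare the mean intensities of opposing halves of the window and keep the largest absolute difference (the exact region pairs in the paper may differ from this simplified left/right and top/bottom split):

```python
def region_difference(img, r=1):
    """Max absolute difference between mean intensities of the left/right
    and top/bottom halves of a (2r+1)x(2r+1) window, per interior pixel."""
    def mean(vals):
        return sum(vals) / len(vals)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            left = [img[y + dy][x + dx] for dy in range(-r, r + 1) for dx in range(-r, 0)]
            right = [img[y + dy][x + dx] for dy in range(-r, r + 1) for dx in range(1, r + 1)]
            top = [img[y + dy][x + dx] for dy in range(-r, 0) for dx in range(-r, r + 1)]
            bottom = [img[y + dy][x + dx] for dy in range(1, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = max(abs(mean(left) - mean(right)), abs(mean(top) - mean(bottom)))
    return out
```

Thresholding the response image then yields the binary image that is morphologically cleaned to isolate the lesion.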
Affiliation(s)
- Nishant Jain
- Biomedical Laboratory, Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, 247667 India
- Vinod Kumar
- Biomedical Laboratory, Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, 247667 India
29
A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images. BIOMED RESEARCH INTERNATIONAL 2017; 2017:9157341. [PMID: 28536703 PMCID: PMC5426079 DOI: 10.1155/2017/9157341] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 01/21/2017] [Accepted: 03/14/2017] [Indexed: 11/17/2022]
Abstract
Ultrasound imaging has become one of the most popular medical imaging modalities, with numerous diagnostic applications. However, ultrasound (US) image segmentation, an essential step for further analysis, is a challenging task due to poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information in the robust graph-based (RGB) segmentation method. The only interaction required is selecting two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. The enhanced image is filtered by pyramid mean shift to improve homogeneity. With optimization by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is performed on the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, using four metrics to measure segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second-highest TPVF (85.34%), and the second-lowest FPVF (4.48%).
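The TPVF and FPVF figures above can be computed directly from binary masks; the sketch below normalizes both fractions by the reference volume, which is one common convention and may differ in detail from the paper's definitions:

```python
def tpvf_fpvf(segmented, reference):
    """True- and false-positive volume fractions of a flattened 0/1
    segmentation mask against a 0/1 reference mask."""
    tp = sum(1 for s, t in zip(segmented, reference) if s and t)
    fp = sum(1 for s, t in zip(segmented, reference) if s and not t)
    ref = sum(reference)
    return tp / ref, fp / ref
```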
30
Breast ultrasound image segmentation: a survey. Int J Comput Assist Radiol Surg 2017; 12:493-507. [DOI: 10.1007/s11548-016-1513-1] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 12/15/2016] [Indexed: 10/20/2022]
31
32
Accurate lumen diameter measurement in curved vessels in carotid ultrasound: an iterative scale-space and spatial transformation approach. Med Biol Eng Comput 2016; 55:1415-1434. [PMID: 27943087 DOI: 10.1007/s11517-016-1601-y] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2016] [Accepted: 11/28/2016] [Indexed: 10/20/2022]
Abstract
Monitoring of cerebrovascular diseases via carotid ultrasound has become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantifying the degree of stenosis. Manual measurement of LD/IAD is unreliable, subjective, and slow. The curvature of the vessels, along with non-uniformity in plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, scale-space is first used to obtain the lumen axis, which is then used with a spatial image transformation paradigm to obtain a transformed image. Scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework, and the inverse transformation is finally applied to display the results in the original image framework. B-mode ultrasound images of the left and right common carotid arteries of 202 patients (404 carotid images) were retrospectively analyzed. The algorithm was validated against two manual expert tracings. The coefficients of correlation between the automated system and the two manual tracings for LD were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7 and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches, and several statistical tests demonstrated the stability and reliability of the automated system.
33
Zhang Q, Xiao Y, Dai W, Suo J, Wang C, Shi J, Zheng H. Deep learning based classification of breast tumors with shear-wave elastography. ULTRASONICS 2016; 72:150-7. [PMID: 27529139 DOI: 10.1016/j.ultras.2016.08.004] [Citation(s) in RCA: 121] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/22/2015] [Revised: 06/30/2016] [Accepted: 08/05/2016] [Indexed: 05/03/2023]
Abstract
This study aims to build a deep learning (DL) architecture for automated extraction of learned-from-data image features from shear-wave elastography (SWE), and to evaluate the architecture in differentiating between benign and malignant breast tumors. We construct a two-layer DL architecture for SWE feature extraction, comprising a point-wise gated Boltzmann machine (PGBM) and a restricted Boltzmann machine (RBM). The PGBM contains task-relevant and task-irrelevant hidden units, and the task-relevant units are connected to the RBM. Experimental evaluation was performed with five-fold cross validation on a set of 227 SWE images (135 of benign tumors and 92 of malignant tumors) from 121 patients. The features learned with our DL architecture were compared with statistical features quantifying image intensity and texture. Results showed that the DL features achieved better classification performance, with an accuracy of 93.4%, a sensitivity of 88.6%, a specificity of 97.1%, and an area under the receiver operating characteristic curve of 0.947. The DL-based method integrates feature learning with feature selection on SWE and can potentially be used in clinical computer-aided diagnosis of breast cancer.
Affiliation(s)
- Qi Zhang
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Yang Xiao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Dai
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jingfeng Suo
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Congzhi Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Institute of Biomedical and Health Engineering, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
34
IFCM Based Segmentation Method for Liver Ultrasound Images. J Med Syst 2016; 40:249. [DOI: 10.1007/s10916-016-0623-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2016] [Accepted: 09/21/2016] [Indexed: 01/04/2023]
|
35
|
Araújo T, Abayazid M, Rutten MJCM, Misra S. Segmentation and three-dimensional reconstruction of lesions using the automated breast volume scanner (ABVS). Int J Med Robot 2016; 13. [DOI: 10.1002/rcs.1767] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Revised: 07/11/2016] [Accepted: 07/12/2016] [Indexed: 01/06/2023]
Affiliation(s)
- Teresa Araújo
- Department of Biomechanical Engineering; University of Twente; P. O. Box 217 7500 AE Enschede Overijsel Netherlands
- Faculty of Engineering of University of Porto; Rua Dr. Roberto Frias 4200-465 Porto Portugal
| | - Momen Abayazid
- Department of Biomechanical Engineering; University of Twente; P. O. Box 217 7500 AE Enschede Overijsel Netherlands
- Department of Radiology; Brigham and Women's Hospital and Harvard Medical School; 75 Francis Street Boston MA 02119 USA
| | - Matthieu J. C. M. Rutten
- Department of Radiology; Jeroen Bosch Hospital; Nieuwstraat 34 5211 NL's-Hertogenbosch The Netherlands
| | - Sarthak Misra
- Department of Biomechanical Engineering; University of Twente; P. O. Box 217 7500 AE Enschede Overijsel Netherlands
- Department of Biomedical Engineering; University of Groningen and University Medical Centre Groningen; Antonius Deusinglaan 1 9713 AV Groningen The Netherlands
| |
Collapse
|
36
|
Kirimasthong K, Rodtook A, Chaumrattanakul U, Makhanov SS. Phase portrait analysis for automatic initialization of multiple snakes for segmentation of the ultrasound images of breast cancer. Pattern Anal Appl 2016. [DOI: 10.1007/s10044-016-0556-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
37
|
Sparks R, Bloch BN, Feleppa E, Barratt D, Moses D, Ponsky L, Madabhushi A. Multiattribute probabilistic prostate elastic registration (MAPPER): application to fusion of ultrasound and magnetic resonance imaging. Med Phys 2016; 42:1153-63. [PMID: 25735270 DOI: 10.1118/1.4905104] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS)-guided needle biopsy is the current gold standard for prostate cancer diagnosis. However, up to 40% of prostate cancer lesions appear isoechoic on TRUS. Hence, TRUS-guided biopsy has a high false-negative rate for prostate cancer diagnosis. Magnetic resonance imaging (MRI) is better able to distinguish prostate cancer from benign tissue, but MRI-guided biopsy requires special equipment and training and a longer procedure time. MRI-TRUS fusion, where MRI is acquired preoperatively and then aligned to TRUS, allows the advantages of both modalities to be leveraged during biopsy, and MRI-TRUS-guided biopsy increases the yield of cancer-positive biopsies. In this work, the authors present multiattribute probabilistic prostate elastic registration (MAPPER) to align prostate MRI and TRUS imagery. METHODS MAPPER involves (1) segmenting the prostate on MRI, (2) calculating a multiattribute probabilistic map of prostate location on TRUS, and (3) maximizing overlap between the prostate segmentation on MRI and the multiattribute probabilistic map on TRUS, thereby driving registration of MRI onto TRUS. MAPPER represents a significant advancement over the current state-of-the-art, as it requires no user interaction during the biopsy procedure, leveraging texture and spatial information to determine the prostate location on TRUS. Although MAPPER requires manual interaction to segment the prostate on MRI, this step is performed prior to biopsy and will not substantially increase biopsy procedure time. RESULTS MAPPER was evaluated on 13 patient studies from two independent datasets: Dataset 1 has 6 studies acquired with a side-firing TRUS probe and a 1.5 T pelvic phased-array coil MRI; Dataset 2 has 7 studies acquired with a volumetric end-firing TRUS probe and a 3.0 T endorectal coil MRI. MAPPER has a root-mean-square error (RMSE) for expert-selected fiducials of 3.36 ± 1.10 mm for Dataset 1 and 3.14 ± 0.75 mm for Dataset 2. State-of-the-art MRI-TRUS fusion methods report RMSEs of 2.07-3.06 mm. CONCLUSIONS MAPPER aligns MRI and TRUS imagery without manual intervention, ensuring efficient, reproducible registration. MAPPER has an RMSE similar to that of state-of-the-art methods that require manual intervention.
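The fiducial-based RMSE quoted above is a simple quantity to reproduce. The sketch below computes it over paired 3-D landmark coordinates; the point values are hypothetical, purely for illustration:

```python
import math

def fiducial_rmse(points_a, points_b):
    """Root-mean-square distance (mm) between corresponding fiducial points."""
    squared = [
        sum((a - b) ** 2 for a, b in zip(pa, pb))
        for pa, pb in zip(points_a, points_b)
    ]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical expert-selected fiducials on MRI and on registered TRUS (x, y, z in mm)
mri_points = [(10.0, 20.0, 5.0), (14.0, 22.0, 7.0)]
trus_points = [(13.0, 20.0, 5.0), (14.0, 18.0, 7.0)]
print(round(fiducial_rmse(mri_points, trus_points), 2))  # → 3.54
```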
Collapse
Affiliation(s)
- Rachel Sparks
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom
| | - B Nicolas Bloch
- Department of Radiology, Boston Medical Center and Boston University, Boston, Massachusetts 02118
| | - Ernest Feleppa
- Lizzi Center for Biomedical Engineering, Riverside Research Institute, New York, New York 10038
| | - Dean Barratt
- Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom
| | - Daniel Moses
- South Western Sydney Clinical School, University of New South Wales, Sydney NSW 2052, Australia
| | - Lee Ponsky
- Department of Urology, University Hospitals Case Medical Center, Cleveland, Ohio 44106
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
| |
Collapse
|
38
|
Pons G, Martí J, Martí R, Ganau S, Noble JA. Breast-lesion Segmentation Combining B-Mode and Elastography Ultrasound. ULTRASONIC IMAGING 2016; 38:209-224. [PMID: 26062760 DOI: 10.1177/0161734615589287] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Breast ultrasound (BUS) imaging has become a crucial modality, especially for providing a complementary view when other modalities (e.g., mammography) are not conclusive in assessing lesions. However, the specificity of cancer detection using BUS imaging is low, and the resulting false-positive findings often lead to unnecessary biopsies. Increasing sensitivity is also challenging, given that artifacts in B-mode ultrasound (US) images can interfere with lesion detection. To deal with these problems and improve diagnostic accuracy, ultrasound elastography was introduced. This paper validates a novel lesion segmentation framework that takes intensity (B-mode) and strain information into account using a Markov Random Field (MRF) and a Maximum a Posteriori (MAP) approach, by applying it to clinical data. A total of 33 images from two different hospitals are used, comprising 14 cancerous and 19 benign lesions. Results show that combining B-mode and strain data in a single framework improves segmentation of cancerous lesions (a Dice Similarity Coefficient of 0.49 using B-mode alone rises to 0.70 when strain data are included), which are difficult cases in which lesions appear with blurred, poorly defined boundaries.
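For context on the Dice Similarity Coefficient figures reported above, a minimal computation over binary segmentation masks might look like the following; the toy masks are hypothetical, not from the study:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (flat sequences of 0/1)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "masks": ground-truth delineation vs. a segmentation result
truth = [0, 1, 1, 1, 0, 0, 1, 1]
seg = [0, 1, 1, 0, 0, 1, 1, 1]
print(dice_coefficient(truth, seg))  # → 0.8
```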
Collapse
Affiliation(s)
- Gerard Pons
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain
| | - Joan Martí
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain
| | - Robert Martí
- Department of Computer Architecture and Technology, University of Girona, Girona, Spain
| | - Sergi Ganau
- Radiology Department, UDIAT-Centre Diagnòstic, Corporació Parc Taulí, Sabadell, Spain
| | - J Alison Noble
- Department of Engineering Science, Institute of Biomedical Engineering, Old Road Campus Research Building, University of Oxford, Oxford, UK
| |
Collapse
|
39
|
Gu P, Lee WM, Roubidoux MA, Yuan J, Wang X, Carson PL. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation. ULTRASONICS 2016; 65:51-8. [PMID: 26547117 PMCID: PMC4702489 DOI: 10.1016/j.ultras.2015.10.023] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2015] [Revised: 10/20/2015] [Accepted: 10/23/2015] [Indexed: 05/18/2023]
Abstract
Segmentation of an ultrasound image into functional tissues is of great importance to the clinical diagnosis of breast cancer. However, many studies segment only the mass of interest rather than all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was applied to a database of 21 cases of whole-breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but also performs well in classifying cysts/masses. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency, with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, using the overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segmenting 3D whole-breast ultrasound volumes into functionally distinct tissues, which may help to correct ultrasound speed-of-sound aberrations and assist in density-based prognosis of breast cancer.
Collapse
Affiliation(s)
- Peng Gu
- Department of Electronic Science and Engineering, Nanjing University, 210093, China
| | - Won-Mean Lee
- Department of Radiology, University of Michigan, 48109, USA
| | | | - Jie Yuan
- Department of Electronic Science and Engineering, Nanjing University, 210093, China.
| | - Xueding Wang
- Department of Radiology, University of Michigan, 48109, USA
| | - Paul L Carson
- Department of Radiology, University of Michigan, 48109, USA.
| |
Collapse
|
40
|
Guo Y, Şengür A, Tian JW. A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2016; 123:43-53. [PMID: 26483304 DOI: 10.1016/j.cmpb.2015.09.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/23/2015] [Revised: 09/02/2015] [Accepted: 09/08/2015] [Indexed: 06/05/2023]
Abstract
Breast ultrasound (BUS) image segmentation is a challenging task due to speckle noise, the poor quality of ultrasound images, and the variable size and location of breast lesions. In this paper, we propose a new BUS image segmentation algorithm based on a neutrosophic similarity score (NSS) and the level set algorithm. First, the input BUS image is transferred to the neutrosophic (NS) domain via three membership subsets T, I, and F; then a similarity score, NSS, is defined and employed to measure the degree of belonging to the true tumor region. Finally, the level set method is used to segment the tumor from the background tissue region in the NSS image. Experiments have been conducted on a variety of clinical BUS images, and several measurements are used to evaluate and compare the proposed method's performance. The experimental results demonstrate that the proposed method segments BUS images effectively and accurately.
Collapse
Affiliation(s)
- Yanhui Guo
- Department of Computer Science, University of Illinois at Springfield, Springfield, IL, USA.
| | - Abdulkadir Şengür
- Department of Electric and Electronics Engineering, Technology Faculty, Firat University, Elazig, Turkey
| | - Jia-Wei Tian
- Department of Ultrasound, Second Affiliated Hospital of Harbin Medical, Harbin, Heilongjiang, China
| |
Collapse
|
41
|
Huang Q, Yang F, Liu L, Li X. Automatic segmentation of breast lesions for interaction in ultrasonic computer-aided diagnosis. Inf Sci (N Y) 2015. [DOI: 10.1016/j.ins.2014.08.021] [Citation(s) in RCA: 72] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
42
|
Rodrigues R, Braz R, Pereira M, Moutinho J, Pinheiro AMG. A Two-Step Segmentation Method for Breast Ultrasound Masses Based on Multi-resolution Analysis. ULTRASOUND IN MEDICINE & BIOLOGY 2015; 41:1737-1748. [PMID: 25736608 DOI: 10.1016/j.ultrasmedbio.2015.01.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/18/2014] [Revised: 01/01/2015] [Accepted: 01/16/2015] [Indexed: 06/04/2023]
Abstract
Breast ultrasound images have several attractive properties that make them an interesting tool in breast cancer detection. However, their intrinsic high noise rate and low contrast turn mass detection and segmentation into a challenging task. In this article, a fully automated two-stage breast mass segmentation approach is proposed. In the initial stage, ultrasound images are segmented using support vector machine or discriminant analysis pixel classification with a multiresolution pixel descriptor. The features are extracted using non-linear diffusion, bandpass filtering and scale-variant mean curvature measures. A set of heuristic rules complement the initial segmentation stage, selecting the region of interest in a fully automated manner. In the second segmentation stage, refined segmentation of the area retrieved in the first stage is attempted, using two different techniques. The AdaBoost algorithm uses a descriptor based on scale-variant curvature measures and non-linear diffusion of the original image at lower scales, to improve the spatial accuracy of the ROI. Active contours use the segmentation results from the first stage as initial contours. Results for both proposed segmentation paths were promising, with normalized Dice similarity coefficients of 0.824 for AdaBoost and 0.813 for active contours. Recall rates were 79.6% for AdaBoost and 77.8% for active contours, whereas the precision rate was 89.3% for both methods.
Collapse
Affiliation(s)
- Rafael Rodrigues
- Optics Center, Universidade da Beira Interior, Covilhã, Portugal.
| | - Rui Braz
- Instituto de Telecomunicações, Universidade da Beira Interior, Covilhã, Portugal
| | - Manuela Pereira
- Instituto de Telecomunicações, Universidade da Beira Interior, Covilhã, Portugal
| | - José Moutinho
- Faculty of Health Sciences, Universidade da Beira Interior, Covilhã, Portugal
| | | |
Collapse
|
43
|
Zhou Z, Wu S, Chang KJ, Chen WR, Chen YS, Kuo WH, Lin CC, Tsui PH. Classification of Benign and Malignant Breast Tumors in Ultrasound Images with Posterior Acoustic Shadowing Using Half-Contour Features. J Med Biol Eng 2015; 35:178-187. [PMID: 25960706 PMCID: PMC4414937 DOI: 10.1007/s40846-015-0031-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2014] [Accepted: 06/16/2014] [Indexed: 12/01/2022]
Abstract
Posterior acoustic shadowing (PAS) can bias breast tumor segmentation and classification in ultrasound images. In this paper, half-contour features are proposed to classify benign and malignant breast tumors with PAS, exploiting the fact that the upper half of the tumor contour is less affected by PAS. Adaptive thresholding and disk expansion are employed to detect tumor contours, and the upper half contour is extracted from the detected full contour. For breast tumor classification, six quantitative feature parameters are analyzed for both full and half contours, including the standard deviation of degree (SDD), which is proposed to describe tumor irregularity. Fifty clinical cases (40 with PAS and 10 without PAS) were used. Tumor circularity (TC) and SDD were both effective full- and half-contour parameters for classifying images without PAS. Half-contour TC [74% accuracy, 72% sensitivity, 76% specificity, 0.78 area under the receiver operating characteristic curve (AUC), p > 0.05] improved the classification of breast tumors with PAS compared with full-contour TC (54% accuracy, 56% sensitivity, 52% specificity, 0.52 AUC, p > 0.05). Half-contour SDD (72% accuracy, 76% sensitivity, 68% specificity, 0.81 AUC, p < 0.05) significantly improved the classification of breast tumors with PAS compared with full-contour SDD (62% accuracy, 80% sensitivity, 44% specificity, 0.61 AUC, p > 0.05). The proposed half-contour TC and SDD may be useful for classifying benign and malignant breast tumors in ultrasound images affected by PAS.
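The accuracy/sensitivity/specificity triples above follow directly from confusion counts. As a sanity check, the sketch below reproduces the half-contour SDD figures (72%/76%/68%) from the counts they would imply for 25 malignant and 25 benign cases; the counts are inferred here for illustration, not taken from the paper:

```python
def classification_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on malignant), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Inferred counts for 50 cases: 19/25 malignant and 17/25 benign classified correctly
acc, sens, spec = classification_metrics(tp=19, fn=6, tn=17, fp=8)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
# → accuracy=0.72 sensitivity=0.76 specificity=0.68
```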
Collapse
Affiliation(s)
- Zhuhuang Zhou
- Biomedical Engineering Center, College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124 China
| | - Shuicai Wu
- Biomedical Engineering Center, College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124 China
| | - King-Jen Chang
- Department of Surgery, Cheng Ching General Hospital, Chung Kang Branch, Taichung, 407 Taiwan
- Department of Surgery, National Taiwan University Hospital, Taipei, 10048 Taiwan
| | - Wei-Ren Chen
- Department of Electrical Engineering, Yuan Ze University, Chung Li, 32003 Taiwan
| | - Yung-Sheng Chen
- Department of Electrical Engineering, Yuan Ze University, Chung Li, 32003 Taiwan
| | - Wen-Hung Kuo
- Department of Surgery, National Taiwan University Hospital, Taipei, 10048 Taiwan
| | - Chung-Chih Lin
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, 33302 Taiwan
| | - Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, 33302 Taiwan
- Institute of Radiological Research, Chang Gung University and Hospital, Taoyuan, 33302 Taiwan
| |
Collapse
|
44
|
A hybrid segmentation method based on Gaussian kernel fuzzy clustering and region based active contour model for ultrasound medical images. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.09.013] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
45
|
Ye C, Vaidya V, Zhao F. Improved mass detection in 3D automated breast ultrasound using region based features and multi-view information. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:2865-8. [PMID: 25570589 DOI: 10.1109/embc.2014.6944221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Breast cancer is one of the leading causes of cancer death for women. Early detection of breast cancer is crucial for reducing mortality rates and improving patient prognosis. Recently, 3D automated breast ultrasound (ABUS) has gained increasing attention for reducing subjectivity and operator dependence and for providing 3D context of the whole breast. In this work, we propose a breast mass detection algorithm that improves voxel-based detection results by incorporating 3D region-based features and multi-view information in 3D ABUS images. Starting from the candidate mass regions produced by the voxel-based method, our proposed approach refines the detection results in three major steps: 1) 3D mass segmentation in a geodesic active contours framework, with edge points obtained by directional searching; 2) region-based single-view and multi-view feature extraction; and 3) support vector machine (SVM) classification to discriminate candidate regions as breast masses or normal background tissue. Twenty-two patients, comprising 51 3D ABUS volumes with 44 breast masses, were used for evaluation. The proposed approach reached sensitivities of 95%, 90%, and 70% with averages of 4.3, 3.8, and 1.6 false positives per volume, respectively. The results also indicate that multi-view information plays an important role in false-positive reduction in 3D breast mass detection.
Collapse
|
46
|
A review of ultrasound common carotid artery image and video segmentation techniques. Med Biol Eng Comput 2014; 52:1073-93. [PMID: 25284219 DOI: 10.1007/s11517-014-1203-5] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2013] [Accepted: 09/22/2014] [Indexed: 10/24/2022]
|
47
|
Hansson M, Brandt SS, Lindström J, Gudmundsson P, Jujić A, Malmgren A, Cheng Y. Segmentation of B-mode cardiac ultrasound data by Bayesian Probability Maps. Med Image Anal 2014; 18:1184-99. [DOI: 10.1016/j.media.2014.06.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 06/02/2014] [Accepted: 06/13/2014] [Indexed: 10/25/2022]
|
48
|
Zhou Z, Wu W, Wu S, Tsui PH, Lin CC, Zhang L, Wang T. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts. ULTRASONIC IMAGING 2014; 36:256-276. [PMID: 24759696 DOI: 10.1177/0161734614524735] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we propose a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required is selecting two diagonal points to determine a region of interest (ROI) on the input image. The ROI image is shrunken by a factor of 2 using bicubic interpolation to reduce computation time, smoothed by a Gaussian filter, and then contrast-enhanced by histogram equalization. Next, the enhanced image is filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts are automatically generated on the filtered image, and using these seeds the filtered image is segmented by graph cuts into a binary image containing the object and background. Finally, the binary image is expanded by a factor of 2 using bicubic interpolation, and the expanded image is processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method achieved a true-positive (TP) rate of 91.7%, a false-positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU with 4 GB of RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful for BUS image segmentation.
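Of the preprocessing chain described above, the histogram-equalization step is the easiest to illustrate in isolation. This is a pure-Python sketch of the standard CDF-remapping formula, not the authors' OpenCV implementation:

```python
def equalize_histogram(pixels, levels=256):
    """Contrast enhancement: remap gray levels through the normalized cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to spread
        return list(pixels)
    # Standard equalization: scale the CDF to the full gray range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1)) for p in pixels]

# A low-contrast toy "image" concentrated in mid grays spreads to the full range
print(equalize_histogram([100, 100, 101, 101, 102, 102, 103, 103]))
# → [0, 0, 85, 85, 170, 170, 255, 255]
```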
Collapse
Affiliation(s)
- Zhuhuang Zhou
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
| | - Weiwei Wu
- College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing, China
| | - Shuicai Wu
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
| | - Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Chung-Chih Lin
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
| | - Ling Zhang
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, Guangdong, China
| | - Tianfu Wang
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, Guangdong, China
| |
Collapse
|
49
|
Ciurte A, Bresson X, Cuisenaire O, Houhou N, Nedevschi S, Thiran JP, Cuadra MB. Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut. PLoS One 2014; 9:e100972. [PMID: 25010530 PMCID: PMC4091944 DOI: 10.1371/journal.pone.0100972] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2014] [Accepted: 06/01/2014] [Indexed: 11/18/2022] Open
Abstract
Ultrasound segmentation is a challenging problem due to inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed, but such priors tend to limit these methods to a specific target or imaging setting and are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum-cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical-expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
Collapse
Affiliation(s)
- Anca Ciurte
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- * E-mail:
| | - Xavier Bresson
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
| | - Olivier Cuisenaire
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
| | - Nawal Houhou
- Swiss Institute of Bioinformatics (SIB), University Hospital Center and University of Lausanne, Lausanne, Switzerland
| | - Sergiu Nedevschi
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
| | - Jean-Philippe Thiran
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
| | - Meritxell Bach Cuadra
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
| |
Collapse
|
50
|
Wang W, Qin J, Chui YP, Heng PA. A multiresolution framework for ultrasound image segmentation by combinative active contours. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2014; 2013:1144-7. [PMID: 24109895 DOI: 10.1109/embc.2013.6609708] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
We propose a novel multiresolution framework for ultrasound image segmentation. The framework exploits both local intensity and local phase information to tackle the degradations of ultrasound images. First, a multiresolution scheme is adopted to build a Gaussian pyramid for each speckled image; speckle noise is gradually smoothed out at higher levels of the pyramid. Then, local intensity-driven active contours are employed to locate the coarse contour of the target in the coarsest image, followed by local phase-based geodesic active contours that further refine the contour in the finer images. Compared with traditional gradient-based methods, phase-based methods are more suitable for ultrasound images because they are invariant to variations in image contrast. Experimental results on left-ventricle segmentation from echocardiographic images demonstrate the advantages of the proposed model.
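The multiresolution scheme described above can be sketched in one dimension: blur with a small binomial kernel approximating a Gaussian, then downsample by two at each level. This is an illustrative toy, not the paper's implementation:

```python
def gaussian_pyramid_1d(signal, levels):
    """Build a 1-D Gaussian pyramid: [1, 2, 1]/4 blur, then decimation by 2, per level."""
    pyramid = [list(signal)]
    for _ in range(levels - 1):
        s = pyramid[-1]
        # Replicate-border binomial blur approximating a Gaussian
        blurred = [
            (s[max(i - 1, 0)] + 2 * s[i] + s[min(i + 1, len(s) - 1)]) / 4
            for i in range(len(s))
        ]
        pyramid.append(blurred[::2])  # keep every second sample
    return pyramid

# A step pattern loses its high-frequency detail at coarser levels
pyr = gaussian_pyramid_1d([0, 0, 4, 4, 0, 0, 4, 4], 3)
print([len(level) for level in pyr])  # → [8, 4, 2]
```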
Collapse
|