1. Tao X, Cao Y, Jiang Y, Wu X, Yan D, Xue W, Zhuang S, Yang X, Huang R, Zhang J, Ni D. Enhancing lesion detection in automated breast ultrasound using unsupervised multi-view contrastive learning with 3D DETR. Med Image Anal 2025;101:103466. PMID: 39854815. DOI: 10.1016/j.media.2025.103466.
Abstract
The inherent variability of lesions poses challenges in leveraging AI in 3D automated breast ultrasound (ABUS) for lesion detection. Traditional methods based on single scans have fallen short compared to comprehensive evaluations by experienced sonologists using multiple scans. To address this, our study introduces an innovative approach combining the multi-view co-attention mechanism (MCAM) with unsupervised contrastive learning. Rooted in the detection transformer (DETR) architecture, our model employs a one-to-many matching strategy, significantly boosting training efficiency and lesion recall metrics. The model integrates MCAM within the decoder, facilitating the interpretation of lesion data across diverse views. Simultaneously, unsupervised multi-view contrastive learning (UMCL) aligns features consistently across scans, improving detection performance. When tested on two multi-center datasets comprising 1509 patients, our approach outperforms existing state-of-the-art 3D detection models. Notably, our model achieves a 90.3% cancer detection rate with a false positive per image (FPPI) rate of 0.5 on the external validation dataset. This surpasses junior sonologists and matches the performance of seasoned experts.
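The unsupervised multi-view contrastive learning component aligns features of the same region observed in different ABUS views. As a point of reference, the sketch below shows a generic symmetric InfoNCE-style loss for paired multi-view features; it illustrates the general technique only, and the feature shapes and temperature are assumed values rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def multiview_info_nce(feat_a, feat_b, temperature=0.07):
    """InfoNCE loss that pulls together features of the same region
    observed in two ABUS views and pushes apart features of different
    regions. feat_a, feat_b: (N, D) tensors; row i of each tensor is
    assumed to describe the same region."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature                  # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # symmetric loss: view A -> view B and view B -> view A
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# example usage with random features
f1, f2 = torch.randn(8, 128), torch.randn(8, 128)
print(multiview_info_nce(f1, f2))
```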
Affiliation(s)
- Xing Tao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Yan Cao
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Yanhui Jiang
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoxi Wu
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dan Yan
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wen Xue
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shulian Zhuang
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ruobing Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
- Jianxing Zhang
- Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China.
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China.
2. Xie Z, Sun Q, Han J, Sun P, Hu X, Ji N, Xu L, Ma J. Spectral analysis enhanced net (SAE-Net) to classify breast lesions with BI-RADS category 4 or higher. Ultrasonics 2024;143:107406. PMID: 39047350. DOI: 10.1016/j.ultras.2024.107406.
Abstract
Early ultrasound screening for breast cancer reduces mortality significantly. The main evaluation criterion for breast ultrasound screening is the Breast Imaging-Reporting and Data System (BI-RADS), which categorizes breast lesions into categories 0-6 based on ultrasound grayscale images. Due to the limitations of ultrasound grayscale imaging, lesions with categories 4 and 5 necessitate additional biopsy for the confirmation of benign or malignant status. In this paper, the SAE-Net was proposed to combine the tissue microstructure information with the morphological information, thus improving the identification of high-grade breast lesions. The SAE-Net consists of a grayscale image branch and a spectral pattern branch. The grayscale image branch used the classical deep learning backbone model to learn the image morphological features from grayscale images, while the spectral pattern branch is designed to learn the microstructure features from ultrasound radio frequency (RF) signals. Our experimental results show that the best SAE-Net model has an area under the receiver operating characteristic curve (AUROC) of 12% higher and a Youden index of 19% higher than the single backbone model. These results demonstrate the effectiveness of our method, which potentially optimizes biopsy exemption and diagnostic efficiency.
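SAE-Net fuses a grayscale-image branch with a spectral branch derived from the RF signals. The following is a minimal two-branch late-fusion sketch in PyTorch under assumed input shapes and layer sizes; it illustrates the fusion idea rather than reproducing the SAE-Net architecture.

```python
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    """Illustrative two-branch design: one branch encodes the B-mode
    (grayscale) image, the other encodes a spectral feature vector derived
    from RF signals; the concatenated features feed a classifier."""
    def __init__(self, spectral_dim=64, n_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (N, 32)
        self.spectral_branch = nn.Sequential(
            nn.Linear(spectral_dim, 32), nn.ReLU())        # -> (N, 32)
        self.head = nn.Linear(64, n_classes)

    def forward(self, image, spectrum):
        fused = torch.cat([self.image_branch(image),
                           self.spectral_branch(spectrum)], dim=1)
        return self.head(fused)

model = TwoBranchClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 64))
print(logits.shape)
```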
Affiliation(s)
- Zhun Xie
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Qizhen Sun
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Jiaqi Han
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Pengfei Sun
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Xiangdong Hu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Nan Ji
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Lijun Xu
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Jianguo Ma
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China.
3. Oh K, Lee SE, Kim EK. 3-D breast nodule detection on automated breast ultrasound using faster region-based convolutional neural networks and U-Net. Sci Rep 2023;13:22625. PMID: 38114666. PMCID: PMC10730541. DOI: 10.1038/s41598-023-49794-8.
Abstract
Mammography is currently the most commonly used modality for breast cancer screening. However, its sensitivity is relatively low in women with dense breasts. Dense breast tissues show a relatively high rate of interval cancers and are at high risk for developing breast cancer. As a supplemental screening tool, ultrasonography is a widely adopted imaging modality to standard mammography, especially for dense breasts. Lately, automated breast ultrasound imaging has gained attention due to its advantages over hand-held ultrasound imaging. However, automated breast ultrasound imaging requires considerable time and effort for reading because of the lengthy data. Hence, developing a computer-aided nodule detection system for automated breast ultrasound is invaluable and impactful practically. This study proposes a three-dimensional breast nodule detection system based on a simple two-dimensional deep-learning model exploiting automated breast ultrasound. Additionally, we provide several postprocessing steps to reduce false positives. In our experiments using the in-house automated breast ultrasound datasets, a sensitivity of [Formula: see text] with 8.6 false positives is achieved on unseen test data at best.
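The postprocessing that turns per-slice 2D detections into 3D findings is not detailed here; a common approach is to stack the slice masks and keep only 3D-connected components that persist across several consecutive slices. The sketch below illustrates that generic false-positive-reduction idea (the minimum-slice threshold is an assumption, not the authors' rule).

```python
import numpy as np
from scipy import ndimage

def merge_slice_detections(det_masks, min_slices=3):
    """Stack per-slice binary detection masks into a volume and keep only
    3D-connected components that span at least `min_slices` slices.
    det_masks: list of 2D boolean arrays, one per ABUS slice."""
    volume = np.stack(det_masks, axis=0)
    labels, n = ndimage.label(volume)
    kept = np.zeros_like(volume, dtype=bool)
    for lab in range(1, n + 1):
        component = labels == lab
        # count how many slices this component touches
        if np.any(component, axis=(1, 2)).sum() >= min_slices:
            kept |= component
    return kept

masks = [np.random.rand(32, 32) > 0.995 for _ in range(10)]
print(merge_slice_detections(masks).sum())
```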
Affiliation(s)
- Kangrok Oh
- Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea.
4. Qiu C, Huang Z, Lin C, Zhang G, Ying S. A despeckling method for ultrasound images utilizing content-aware prior and attention-driven techniques. Comput Biol Med 2023;166:107515. PMID: 37839221. DOI: 10.1016/j.compbiomed.2023.107515.
Abstract
The despeckling of ultrasound images contributes to the enhancement of image quality and facilitates precise treatment of conditions such as tumor cancers. However, the use of existing methods for eliminating speckle noise can cause the loss of image texture features, impacting clinical judgment. Thus, maintaining clear lesion boundaries while eliminating speckle noise is a challenging task. This paper presents an innovative approach for denoising ultrasound images using a novel noise reduction network model called content-aware prior and attention-driven (CAPAD). The model employs a neural network to automatically capture the hidden prior features in ultrasound images to guide denoising and embeds the denoiser into the optimization module to simultaneously optimize parameters and noise. Moreover, this model incorporates a content-aware attention module and a loss function that preserves the structural characteristics of the image. These additions enhance the network's capacity to capture and retain valuable information. Extensive qualitative evaluation and quantitative analysis performed on a comprehensive dataset provide compelling evidence of the model's superior denoising capabilities. It excels in noise suppression while successfully preserving the underlying structures within the ultrasound images. Compared to other denoising algorithms, it demonstrates an improvement of approximately 5.88% in PSNR and approximately 3.61% in SSIM. Furthermore, using CAPAD as a preprocessing step for breast tumor segmentation in ultrasound images can greatly improve the accuracy of image segmentation. The experimental results indicate that the utilization of CAPAD leads to a notable enhancement of 10.43% in the AUPRC for breast cancer tumor segmentation.
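The reported gains are in PSNR and SSIM; these metrics can be computed with scikit-image as sketched below. This is standard metric code, not the authors' evaluation script, and the image data range is assumed to be [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def despeckling_metrics(reference, denoised):
    """Compare a denoised image against a reference image; both are
    float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
    ssim = structural_similarity(reference, denoised, data_range=1.0)
    return psnr, ssim

ref = np.clip(np.random.rand(128, 128), 0, 1)
noisy = np.clip(ref + 0.05 * np.random.randn(128, 128), 0, 1)
print(despeckling_metrics(ref, noisy))
```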
Affiliation(s)
- Chenghao Qiu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610000, Sichuan, China.
- Zifan Huang
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, 524088, China.
- Cong Lin
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, 524088, China.
- Guodao Zhang
- Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China.
- Shenpeng Ying
- Department of Radiotherapy, Taizhou Central Hospital (Taizhou University Hospital), Taizhou, 318000, China.
5. Malekmohammadi A, Barekatrezaei S, Kozegar E, Soryani M. Mass detection in automated 3-D breast ultrasound using a patch Bi-ConvLSTM network. Ultrasonics 2023;129:106891. PMID: 36493507. DOI: 10.1016/j.ultras.2022.106891.
Abstract
Breast cancer mortality can be significantly reduced by early detection of its symptoms. The 3-D Automated Breast Ultrasound (ABUS) has been widely used for breast screening due to its high sensitivity and reproducibility. The large number of ABUS slices, and high variation in size and shape of the masses, make the manual evaluation a challenging and time-consuming process. To assist the radiologists, we propose a convolutional BiLSTM network to classify the slices based on the presence of a mass. Because of its patch-based architecture, this model produces the approximate location of masses as a heat map. The prepared dataset consists of 60 volumes belonging to 43 patients. The precision, recall, accuracy, F1-score, and AUC of the proposed model for slice classification were 84%, 84%, 93%, 84%, and 97%, respectively. Based on the FROC analysis, the proposed detector obtained a sensitivity of 82% with two false positives per volume.
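An FROC operating point such as "82% sensitivity at two false positives per volume" is obtained by thresholding candidate scores and normalizing false positives by the number of volumes. A simplified sketch of one such operating point is shown below; it assumes at most one candidate per lesion and is not the study's evaluation code.

```python
import numpy as np

def froc_point(candidate_scores, candidate_is_tp, n_lesions, n_volumes, threshold):
    """Sensitivity and false positives per volume at one score threshold.
    candidate_scores / candidate_is_tp: arrays over all detected candidates;
    assumes at most one true-positive candidate per lesion."""
    keep = candidate_scores >= threshold
    tp = np.sum(keep & candidate_is_tp)
    fp = np.sum(keep & ~candidate_is_tp)
    return tp / n_lesions, fp / n_volumes

scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3])
is_tp = np.array([True, False, True, False, True])
print(froc_point(scores, is_tp, n_lesions=3, n_volumes=2, threshold=0.5))
```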
Affiliation(s)
- Amin Malekmohammadi
- School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran.
- Sepideh Barekatrezaei
- School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran.
- Ehsan Kozegar
- Faculty of Technology and Engineering-East of Guilan, University of Guilan, Vajargah, Rudsar, Guilan 4199613776, Iran.
- Mohsen Soryani
- School of Computer Engineering, Iran University of Science and Technology (IUST), Tehran 16846, Iran.
6. Lee H, Lee MH, Youn S, Lee K, Lew HM, Hwang JY. Speckle Reduction via Deep Content-Aware Image Prior for Precise Breast Tumor Segmentation in an Ultrasound Image. IEEE Trans Ultrason Ferroelectr Freq Control 2022;69:2638-2650. PMID: 35877808. DOI: 10.1109/tuffc.2022.3193640.
Abstract
The performance of computer-aided diagnosis (CAD) systems that are based on ultrasound imaging has been enhanced owing to the advancement in deep learning. However, because of the inherent speckle noise in ultrasound images, the ambiguous boundaries of lesions deteriorate and are difficult to distinguish, resulting in the performance degradation of CAD. Although several methods have been proposed to reduce speckle noise over decades, this task remains a challenge that must be improved to enhance the performance of CAD. In this article, we propose a deep content-aware image prior (DCAIP) with a content-aware attention module (CAAM) for superior despeckling of ultrasound images without clean images. For the image prior, we developed a CAAM to deal with the content information in an input image. In this module, super-pixel pooling (SPP) is used to give attention to salient regions in an ultrasound image. Therefore, it can provide more content information regarding the input image when compared to other attention modules. The DCAIP consists of deep learning networks based on this attention module. The DCAIP is validated by applying it as a preprocessing step for breast tumor segmentation in ultrasound images, which is one of the tasks in CAD. Our method improved the segmentation performance by 15.89% in terms of the area under the precision-recall (PR) curve (AUPRC). The results demonstrate that our method enhances the quality of ultrasound images by effectively reducing speckle noise while preserving important information in the image, promising for the design of superior CAD systems.
7. Wang H, Yang X, Ma S, Zhu K, Guo S. An Optimized Radiomics Model Based on Automated Breast Volume Scan Images to Identify Breast Lesions: Comparison of Machine Learning Methods. J Ultrasound Med 2022;41:1643-1655. PMID: 34609750. DOI: 10.1002/jum.15845.
Abstract
OBJECTIVES To develop and test an optimized radiomics model based on multi-planar automated breast volume scan (ABVS) images to identify malignant and benign breast lesions. METHODS Patients (n = 200) with breast lesions who underwent ABVS examinations were included. For each patient, 208 radiomics features were extracted from the ABVS images, including axial plane and coronal plane. Recursive feature elimination, random forest, and chi-square test were used to select features. A support vector machine, logistic regression, and extreme gradient boosting were utilized as classifiers to differentiate malignant and benign breast lesions. The area under the curve, sensitivity, specificity, accuracy, and precision was used to evaluate the performance of the radiomics models. Generalization of the radiomics models was verified through 5-fold cross-validation. RESULTS For a single plane or a combination of planes, a combination of recursive feature elimination, and support vector machine yielded the best performance when identifying breast lesions. The machine learning models based on a combination of planes performed better than those based on a single plane. Regarding the axial plane and coronal plane, the machine learning model using a combination of recursive feature elimination and support vector machine yielded the optimal identification performance: average area under the curve (0.857 ± 0.058, 95% confidence interval, 0.763-0.957); the average values of sensitivity, specificity, accuracy, and precision were 87.9, 68.2, 80.7, and 82.9%, respectively. CONCLUSIONS The optimized radiomics model based on ABVS images can provide valuable information for identifying benign and malignant breast lesions preoperatively and guide the accurate clinical treatment. Further external validation is required.
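The best-performing combination reported above, recursive feature elimination with a support vector machine under 5-fold cross-validation, maps directly onto a standard scikit-learn pipeline. The sketch below is illustrative only; the number of selected features, the kernels, and the synthetic data are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: radiomics feature matrix (n_lesions x n_features), y: benign/malignant labels
X, y = np.random.randn(200, 208), np.random.randint(0, 2, 200)

pipeline = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=20),   # recursive feature elimination
    SVC(kernel="rbf", probability=True))                   # final classifier

auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(auc.mean(), auc.std())
```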
Affiliation(s)
- Hui Wang
- The First Clinical Medical College, Lanzhou University, Lanzhou City, China
- Department of Ultrasound, The First Hospital of Lanzhou University, Lanzhou City, China
- Xinwu Yang
- College of Computer Science, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Sumei Ma
- Department of Ultrasound, The First Hospital of Lanzhou University, Lanzhou City, China
- Kongqiang Zhu
- College of Computer Science, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shunlin Guo
- The First Clinical Medical College, Lanzhou University, Lanzhou City, China
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou City, China
8. Luo X, Xu M, Tang G, Wang Y, Wang N, Ni D, Li X, Li AH. The lesion detection efficacy of deep learning on automatic breast ultrasound and factors affecting its efficacy: a pilot study. Br J Radiol 2022;95:20210438. PMID: 34860574. PMCID: PMC8822545. DOI: 10.1259/bjr.20210438.
Abstract
OBJECTIVES The aim of this study was to investigate the detection efficacy of deep learning (DL) for automatic breast ultrasound (ABUS) and factors affecting its efficacy. METHODS Females who underwent ABUS and handheld ultrasound from May 2016 to June 2017 (N = 397) were enrolled and divided into training (n = 163 patients with breast cancer and 33 with benign lesions), test (n = 57) and control (n = 144) groups. A convolutional neural network was optimized to detect lesions in ABUS. The sensitivity and false positives (FPs) were evaluated and compared for different breast tissue compositions, lesion sizes, morphologies and echo patterns. RESULTS In the training set, with 688 lesion regions (LRs), the network achieved sensitivities of 93.8%, 97.2% and 100%, based on volume, lesion and patient, respectively, with 1.9 FPs per volume. In the test group with 247 LRs, the sensitivities were 92.7%, 94.5% and 96.5%, respectively, with 2.4 FPs per volume. The control group, with 900 volumes, showed 0.24 FPs per volume. The sensitivity was 98% for lesions > 1 cm3, but 87% for those ≤1 cm3 (p < 0.05). Similar sensitivities and FPs were observed for different breast tissue compositions (homogeneous, 97.5%, 2.1; heterogeneous, 93.6%, 2.1), lesion morphologies (mass, 96.3%, 2.1; non-mass, 95.8%, 2.0) and echo patterns (homogeneous, 96.1%, 2.1; heterogeneous 96.8%, 2.1). CONCLUSIONS DL had high detection sensitivity with a low FP but was affected by lesion size. ADVANCES IN KNOWLEDGE DL is technically feasible for the automatic detection of lesions in ABUS.
Affiliation(s)
- Yi Wang
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Na Wang
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Dong Ni
- National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and also with the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
9. An Automatic Procedure for Overheated Idler Detection in Belt Conveyors Using Fusion of Infrared and RGB Images Acquired during UGV Robot Inspection. Energies 2022. DOI: 10.3390/en15020601.
Abstract
Complex mechanical systems used in the mining industry for efficient raw materials extraction require proper maintenance. Especially in a deep underground mine, the regular inspection of machines operating in extremely harsh conditions is challenging, thus, monitoring systems and autonomous inspection robots are becoming more and more popular. In the paper, it is proposed to use a mobile unmanned ground vehicle (UGV) platform equipped with various data acquisition systems for supporting inspection procedures. Although maintenance staff with appropriate experience are able to identify problems almost immediately, due to mentioned harsh conditions such as temperature, humidity, poisonous gas risk, etc., their presence in dangerous areas is limited. Thus, it is recommended to use inspection robots collecting data and appropriate algorithms for their processing. In this paper, the authors propose red-green-blue (RGB) and infrared (IR) image fusion to detect overheated idlers. An original procedure for image processing is proposed, that exploits some characteristic features of conveyors to pre-process the RGB image to minimize non-informative components in the pictures collected by the robot. Then, the authors use this result for IR image processing to improve SNR and finally detect hot spots in IR image. The experiments have been performed on real conveyors operating in industrial conditions.
10.
Abstract
Dental Caries are one of the most prevalent chronic diseases around the globe. Detecting carious lesions is a challenging task. Conventional computer aided diagnosis and detection methods in the past have heavily relied on the visual inspection of teeth. These methods are only effective on large and clearly visible caries on affected teeth. Conventional methods have been limited in performance due to the complex visual characteristics of dental caries images, which consist of hidden or inaccessible lesions. The early detection of dental caries is an important determinant for treatment and benefits much from the introduction of new tools, such as dental radiography. In this paper, we propose a deep learning-based technique for dental caries detection namely: blob detection. The proposed technique automatically detects hidden and inaccessible dental caries lesions in bitewing radio-graphs. The approach employs data augmentation to increase the number of images in the data set to have a total of 11,114 dental images. Image pre-processing on the data set was through the use of Gaussian blur filters. Image segmentation was handled through thresholding, erosion and dilation morphology, while image boundary detection was achieved through active contours method. Furthermore, the deep learning based network through the sequential model in Keras extracts features from the images through blob detection. Finally, a convexity threshold value of 0.9 is introduced to aid in the classification of caries as either present or not present. The process of detection and classifying dental caries achieved the results of 97% and 96% for the precision and recall values, respectively.
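The preprocessing chain described above (Gaussian blur, thresholding, erosion/dilation morphology, contour analysis with a convexity threshold of 0.9) can be expressed with OpenCV roughly as follows. Kernel sizes and the Otsu thresholding scheme are placeholder choices for illustration, not the authors' exact settings.

```python
import cv2
import numpy as np

def candidate_regions(gray, convexity_threshold=0.9):
    """Illustrative pre-processing chain: blur, threshold, morphology,
    then flag contours whose convexity (area / convex-hull area) exceeds
    a threshold. Parameter values are placeholders. Assumes OpenCV 4.x."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)      # erosion then dilation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    flagged = []
    for c in contours:
        hull_area = cv2.contourArea(cv2.convexHull(c))
        if hull_area > 0 and cv2.contourArea(c) / hull_area >= convexity_threshold:
            flagged.append(c)
    return flagged

image = (np.random.rand(256, 256) * 255).astype(np.uint8)
print(len(candidate_regions(image)))
```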
12. Lei Y, He X, Yao J, Wang T, Wang L, Li W, Curran WJ, Liu T, Xu D, Yang X. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med Phys 2021;48:204-214. PMID: 33128230. DOI: 10.1002/mp.14569.
Abstract
PURPOSE Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis since it provides complementary information to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for breast tumor segmentation using three-dimensional (3D) ABUS automatically. METHODS For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks, that is, a backbone, a regional proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building direct correlation between mask quality and region class was integrated into a Mask scoring R-CNN based framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumor confirmed with needle biopsy and manually delineated on ABUS, of which 40 were used for fivefold cross-validation and 30 were used for hold-out test. The comparison between the automatic breast tumor segmentations and the manual contours was quantified by I) six metrics including Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); II) Pearson correlation analysis and Bland-Altman analysis. RESULTS The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests was 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The mean volumetric difference (mean and ± 1.96 standard deviation) was 0.47 cc ([-0.77, 1.71)) for the cross-validation and 0.23 cc ([-0.23 0.69]) for hold-out test, respectively. CONCLUSION We developed a novel Mask scoring R-CNN approach for the automated segmentation of the breast tumor in ABUS images and demonstrated its accuracy for breast tumor segmentation. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
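The Dice similarity coefficient that anchors the evaluation above is computed from the overlap of the predicted and manual binary masks. A minimal reference implementation for 3D volumes is given below; it is generic metric code, not the study's evaluation pipeline.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), bool); b[10:22, 10:22, 10:22] = True
print(dice_coefficient(a, b))
```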
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Lijing Wang
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
13. Chiu LY, Kuo WH, Chen CN, Chang KJ, Chen A. A 2-Phase Merge Filter Approach to Computer-Aided Detection of Breast Tumors on 3-Dimensional Ultrasound Imaging. J Ultrasound Med 2020;39:2439-2455. PMID: 32567133. DOI: 10.1002/jum.15365.
Abstract
OBJECTIVES The role of image analysis in 3-dimensional (3D) automated breast ultrasound (ABUS) images is increasingly important because of its widespread use as a screening tool in whole-breast examinations. However, reviewing a large number of images acquired from ABUS is time-consuming and sometimes error prone. The aim of this study, therefore, was to develop an efficient computer-aided detection (CADe) algorithm to assist the review process. METHODS The proposed CADe algorithm consisted of 4 major steps. First, initial tumor candidates were formed by extracting and merging hypoechoic square cells on 2-dimensional (2D) transverse images. Second, a feature-based classifier was then constructed using 2D features to filter out nontumor candidates. Third, the remaining 2D candidates were merged longitudinally into 3D masses. Finally, a 3D feature-based classifier was used to further filter out nontumor masses to obtain the final detected masses. The proposed method was validated with 176 passes of breast images acquired by an Acuson S2000 automated breast volume scanner (Siemens Medical Solutions USA, Inc., Malvern, PA), including 44 normal passes and 132 abnormal passes containing 162 proven lesions (79 benign and 83 malignant). RESULTS The proposed CADe system could achieve overall sensitivity of 100% and 90% with 6.71 and 5.14 false-positives (FPs) per pass, respectively. Our results also showed that the average number of FPs per normal pass (7.16) was more than the number of FPs per abnormal pass (6.56) at 100% sensitivity. CONCLUSIONS The proposed CADe system has a great potential for becoming a good companion tool with ABUS imaging by ensuring high sensitivity with a relatively small number of FPs.
Affiliation(s)
- Ling-Ying Chiu
- Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
- Wen-Hung Kuo
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chiung-Nien Chen
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- King-Jen Chang
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Argon Chen
- Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
- Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
14. Kim J, Kim HJ, Kim C, Kim WH. Artificial intelligence in breast ultrasonography. Ultrasonography 2020;40:183-190. PMID: 33430577. PMCID: PMC7994743. DOI: 10.14366/usg.20117.
Abstract
Although breast ultrasonography is the mainstay modality for differentiating between benign and malignant breast masses, it has intrinsic problems with false positives and substantial interobserver variability. Artificial intelligence (AI), particularly with deep learning models, is expected to improve workflow efficiency and serve as a second opinion. AI is highly useful for performing three main clinical tasks in breast ultrasonography: detection (localization/segmentation), differential diagnosis (classification), and prognostication (prediction). This article provides a current overview of AI applications in breast ultrasonography, with a discussion of methodological considerations in the development of AI models and an up-to-date literature review of potential clinical applications.
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
- Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
- Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
- Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
15. Li Y, Wu W, Chen H, Cheng L, Wang S. 3D tumor detection in automated breast ultrasound using deep convolutional neural network. Med Phys 2020;47:5669-5680. PMID: 32970838. DOI: 10.1002/mp.14477.
Affiliation(s)
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Wen Wu
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Lin Cheng
- Center for Breast, People’s Hospital of Peking University, Beijing, China
- Shu Wang
- Center for Breast, People’s Hospital of Peking University, Beijing, China
16. Rajasree R, Columbus CC, Shilaja C. Multiscale-based multimodal image classification of brain tumor using deep learning method. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-05332-5.
17. Automatic detection of intracranial aneurysms in 3D-DSA based on a Bayesian optimized filter. Biomed Eng Online 2020;19:73. PMID: 32933534. PMCID: PMC7493845. DOI: 10.1186/s12938-020-00817-9.
Abstract
Background Intracranial aneurysm is a common type of cerebrovascular disease with a risk of devastating subarachnoid hemorrhage if it is ruptured. Accurate computer-aided detection of aneurysms can help doctors improve the diagnostic accuracy, and it is very helpful in reducing the risk of subarachnoid hemorrhage. Aneurysms are detected in 2D or 3D images from different modalities. 3D images can provide more vascular information than 2D images, and it is more difficult to detect. The detection performance of 2D images is related to the angle of view; it may take several angles to determine the aneurysm. As the gold standard for the diagnosis of vascular diseases, the detection on digital subtraction angiography (DSA) has more clinical value than other modalities. In this study, we proposed an adaptive multiscale filter to detect intracranial aneurysms on 3D-DSA. Methods Adaptive aneurysm detection consists of three parts. The first part is a filter based on Hessian matrix eigenvalues, whose parameters are automatically obtained by Bayesian optimization. The second part is aneurysm extraction based on region growth and adaptive thresholding. The third part is the iterative detection strategy for multiple aneurysms. Results The proposed method was quantitatively evaluated on data sets of 145 patients. The results showed a detection precision of 94.6%, and a sensitivity of 96.4% with a false-positive rate of 6.2%. Among aneurysms smaller than 5 mm, 93.9% were found. Compared with aneurysm detection on 2D-DSA, automatic detection on 3D-DSA can effectively reduce the misdiagnosis rate and obtain more accurate detection results. Compared with other modalities detection, we also get similar or better detection performance. Conclusions The experimental results show that the proposed method is stable and reliable for aneurysm detection, which provides an option for doctors to accurately diagnose aneurysms.
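The first stage, a filter based on Hessian-matrix eigenvalues, responds strongly to bright rounded (aneurysm-like) structures. The sketch below shows a generic single-scale Hessian blobness measure; the smoothing scale is exactly the kind of parameter the paper tunes with Bayesian optimization, and its value here is an arbitrary placeholder rather than the paper's choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blobness_3d(volume, sigma=2.0):
    """Simple Hessian-eigenvalue blob measure: bright rounded structures
    give three negative eigenvalues after Gaussian smoothing. A generic
    blobness-style filter; `sigma` is an illustrative scale parameter."""
    smoothed = gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(smoothed)
    hessian = np.empty(volume.shape + (3, 3))
    for i, g in enumerate(grads):
        second = np.gradient(g)
        for j in range(3):
            hessian[..., i, j] = second[j]
    eigvals = np.linalg.eigvalsh(hessian)            # sorted ascending per voxel
    bright_blob = np.all(eigvals < 0, axis=-1)       # all negative -> bright blob
    return bright_blob * np.abs(eigvals).prod(axis=-1)

vol = np.zeros((32, 32, 32)); vol[14:18, 14:18, 14:18] = 1.0
print(blobness_3d(vol).max())
```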
18. Wang F, Liu X, Yuan N, Qian B, Ruan L, Yin C, Jin C. Study on automatic detection and classification of breast nodule using deep convolutional neural network system. J Thorac Dis 2020;12:4690-4701. PMID: 33145042. PMCID: PMC7578508. DOI: 10.21037/jtd-19-3013.
Abstract
Background Conventional manual ultrasound scanning and human interpretation of breast images are operator-dependent, relatively slow, and error-prone. In this study, we used an automated breast ultrasound (ABUS) machine for scanning and deep convolutional neural network (CNN) technology, a form of deep learning (DL), for the detection and classification of breast nodules, aiming to achieve automatic and accurate diagnosis. Methods Two hundred and ninety-three lesions from 194 patients with definite pathological diagnoses (117 benign and 176 malignant) were recruited as the case group. Another 70 patients without breast disease were enrolled as the control group. All breast scans were acquired with an ABUS machine and then randomly divided into training, validation, and test sets in a 7:1:2 ratio. On the training set, we constructed a detection model based on a three-dimensional U-shaped convolutional neural network (3D U-Net) architecture to segment nodules from the background breast tissue. Residual blocks, attention connections, and hard-example mining were used to optimize the model, while random cropping, flipping, and rotation were used for data augmentation. In the test phase, the model was compared with those in previously reported studies, and its detection performance was evaluated on the validation set. In the classification phase, multiple convolutional and fully connected layers were used to build a classification model that identifies whether a nodule is malignant. Results Our detection model yielded a sensitivity of 91% with 1.92 false positives per automated scan. The classification model achieved a sensitivity of 87.0%, a specificity of 88.0%, and an accuracy of 87.5%. Conclusions A deep CNN combined with ABUS may be a promising tool for the convenient detection and accurate diagnosis of breast nodules.
Affiliation(s)
- Feiqian Wang
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Xiaotong Liu
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Na Yuan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Buyue Qian
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Litao Ruan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Changchang Yin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Ciping Jin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
19. Lei B, Huang S, Li H, Li R, Bian C, Chou YH, Qin J, Zhou P, Gong X, Cheng JZ. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med Image Anal 2020;64:101753. DOI: 10.1016/j.media.2020.101753.
20. Moon WK, Huang YS, Hsu CH, Chang Chien TY, Chang JM, Lee SH, Huang CS, Chang RF. Computer-aided tumor detection in automated breast ultrasound using a 3-D convolutional neural network. Comput Methods Programs Biomed 2020;190:105360. PMID: 32007838. DOI: 10.1016/j.cmpb.2020.105360.
Abstract
BACKGROUND AND OBJECTIVES Automated breast ultrasound (ABUS) is a widely used screening modality for breast cancer detection and diagnosis. In this study, an effective and fast computer-aided detection (CADe) system based on a 3-D convolutional neural network (CNN) is proposed as the second reader for the physician in order to decrease the reviewing time and misdetection rate. METHODS Our CADe system uses the sliding window method, a CNN-based determining model, and a candidate aggregation algorithm. First, the sliding window method is performed to split the ABUS volume into volumes of interest (VOIs). Afterward, VOIs are selected as tumor candidates by our determining model. To achieve higher performance, focal loss and ensemble learning are used to solve data imbalance and reduce false positive (FP) and false negative (FN) rates. Because several selected candidates may be part of the same tumor and they may overlap each other, a candidate aggregation method is applied to merge the overlapping candidates into the final detection result. RESULTS In the experiments, 165 and 81 cases are utilized for training the system and evaluating system performance, respectively. On evaluation with the 81 cases, our system achieves sensitivities of 100% (81/81), 95.3% (77/81), and 90.9% (74/81) with FPs per pass (per case) of 21.6 (126.2), 6.0 (34.8), and 4.6 (27.1) respectively. According to the results, the number of FPs per pass (per case) can be diminished by 56.8% (57.1%) at a sensitivity of 95.3% based on our tumor detection model. CONCLUSIONS In conclusion, our CADe system using 3-D CNN with the focal loss and ensemble learning may have the capability of being a tumor detection system in ABUS image.
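Focal loss, used above to counter the extreme imbalance between tumor and background candidates, down-weights easy examples so training concentrates on hard ones. A standard binary focal loss sketch is shown below with the commonly used alpha/gamma defaults, which are not necessarily the values chosen in this study.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss for tumor-candidate classification: the (1 - p_t)^gamma
    factor shrinks the contribution of well-classified (easy) examples."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(16)
labels = torch.randint(0, 2, (16,)).float()
print(binary_focal_loss(logits, labels))
```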
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, South Korea
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Chin-Hua Hsu
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Ting-Yin Chang Chien
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Jung Min Chang
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, South Korea
- Su Hyun Lee
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, South Korea
- Chiun-Sheng Huang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan.
21. Shiji TP, Remya S, Lakshmanan R, Pratab T, Thomas V. Evolutionary intelligence for breast lesion detection in ultrasound images: A wavelet modulus maxima and SVM based approach. J Intell Fuzzy Syst 2020. DOI: 10.3233/jifs-179709.
Affiliation(s)
- T. P. Shiji
- Department of Electronics Engineering, Model Engineering College, Kochi, India
- S. Remya
- Department of Electronics Engineering, Model Engineering College, Kochi, India
- Rekha Lakshmanan
- Department of Computer Engineering, KMEA College of Engineering, Kerala, India
- Vinu Thomas
- Department of Electronics Engineering, Model Engineering College, Kochi, India
22. Wang Y, Wang N, Xu M, Yu J, Qin C, Luo X, Yang X, Wang T, Li A, Ni D. Deeply-Supervised Networks With Threshold Loss for Cancer Detection in Automated Breast Ultrasound. IEEE Trans Med Imaging 2020;39:866-876. PMID: 31442972. DOI: 10.1109/tmi.2019.2936500.
Abstract
Automated breast ultrasound (ABUS) is a new and promising imaging modality for breast cancer screening. Compared with conventional 2D B-mode ultrasound, ABUS offers operator-independent image acquisition and provides 3D views of the whole breast. However, reviewing ABUS images is time-consuming, and lesions may be missed. In this study, we propose a novel 3D convolutional network for automated cancer detection in ABUS, aiming to speed up review while achieving high detection sensitivity with low false positives (FPs). Specifically, we propose a densely deep supervision mechanism that exploits multi-layer features to improve detection sensitivity, and a threshold loss that provides a voxel-level adaptive threshold for distinguishing cancerous from non-cancerous voxels, achieving high sensitivity with low FPs. The method was evaluated on a dataset of 219 patients with 614 ABUS volumes containing 745 cancer regions, and 144 healthy women with 900 volumes without abnormal findings. Extensive experiments show a sensitivity of 95% at 0.84 FPs per volume. The proposed network therefore offers an effective cancer detection scheme for breast examination with ABUS by sustaining high sensitivity with low false positives. The code is publicly available at https://github.com/nawang0226/abus_code.
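Deep supervision in the generic sense attaches auxiliary losses to intermediate decoder outputs so that multi-layer features are trained directly. The sketch below illustrates that idea for a 3D output; the upsampling scheme and side-output weighting are assumptions and do not reproduce the paper's densely deep supervision or its threshold loss.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(side_outputs, final_output, target, side_weight=0.4):
    """Auxiliary predictions from intermediate layers are upsampled to the
    target resolution and each contributes a down-weighted loss term in
    addition to the final output. Weighting is an illustrative assumption."""
    loss = F.binary_cross_entropy_with_logits(final_output, target)
    for side in side_outputs:
        side_up = F.interpolate(side, size=target.shape[2:],
                                mode="trilinear", align_corners=False)
        loss = loss + side_weight * F.binary_cross_entropy_with_logits(side_up, target)
    return loss

target = torch.randint(0, 2, (1, 1, 32, 32, 32)).float()
final = torch.randn(1, 1, 32, 32, 32)
sides = [torch.randn(1, 1, 16, 16, 16), torch.randn(1, 1, 8, 8, 8)]
print(deeply_supervised_loss(sides, final, target))
```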
23. Lee CY, Chang TF, Chou YH, Yang KC. Fully automated lesion segmentation and visualization in automated whole breast ultrasound (ABUS) images. Quant Imaging Med Surg 2020;10:568-584. PMID: 32269918. DOI: 10.21037/qims.2020.01.12.
Abstract
Background The number of breast cancer patients has increased each year, and the demand for breast cancer detection has become quite large. There are many common breast cancer diagnostic tools. The latest automated whole breast ultrasound (ABUS) technology can obtain a complete breast tissue structure, which improves breast cancer detection technology. However, due to the large amount of ABUS image data, manual interpretation is time-consuming and labor-intensive. If there are lesions in multiple images, there may be some omissions. In addition, if further volume information or the three-dimensional shape of the lesion is needed for therapy, it is necessary to manually segment each lesion, which is inefficient for diagnosis. Therefore, automatic lesion segmentation for ABUS is an important issue for guiding therapy. Methods Due to the amount of speckle noise in an ultrasonic image and the low contrast of the lesion boundary, it is quite difficult to automatically segment the lesion. To address the above challenges, this study proposes an automated lesion segmentation algorithm. The architecture of the proposed algorithm can be divided into four parts: (I) volume of interest selection, (II) preprocessing, (III) segmentation, and (IV) visualization. A volume of interest (VOI) is automatically selected first via a three-dimensional level-set, and then the method uses anisotropic diffusion to address the speckled noise and intensity inhomogeneity correction to eliminate shadowing artifacts before the adaptive distance regularization level set method (DRLSE) conducts segmentation. Finally, the two-dimensional segmented images are reconstructed for visualization in the three-dimensional space. Results The ground truth is delineated by two radiologists with more than 10 years of experience in breast sonography. In this study, three performance assessments are carried out to evaluate the effectiveness of the proposed algorithm. The first assessment is the similarity measurement. The second assessment is the comparison of the results of the proposed algorithm and the Chan-Vese level set method. The third assessment is the volume estimation of phantom cases. In this study, in the 2D validation of the first assessment, the area Dice similarity coefficients of the real cases named cases A, real cases B and phantoms are 0.84±0.02, 0.86±0.03 and 0.92±0.02, respectively. The overlap fraction (OF) and overlap value (OV) of the real cases A are 0.84±0.06 and 0.78±0.04, real case B are 0.91±0.04 and 0.82±0.05, respectively. The overlap fraction (OF) and overlap value (OV) of the phantoms are 0.95±0.02 and 0.92±0.03, respectively. In the 3D validation, the volume Dice similarity coefficients of the real cases A, real cases B and phantoms are 0.85±0.02, 0.89±0.04 and 0.94±0.02, respectively. The overlap fraction (OF) and overlap value (OV) of the real cases A are 0.82±0.06 and 0.79±0.04, real cases B are 0.92±0.04 and 0.85±0.07, respectively. The overlap fraction (OF) and overlap value (OV) of the phantoms are 0.95±0.01 and 0.93±0.04, respectively. Therefore, the proposed algorithm is highly reliable in most cases. In the second assessment, compared with Chan-Vese level set method, the Dice of the proposed algorithm in real cases A, real cases B and phantoms are 0.84±0.02, 0.86±0.03 and 0.92±0.02, respectively. The Dice of Chan-Vese level set in real cases A, real cases B and phantoms are 0.65±0.23, 0.69±0.14 and 0.76±0.14, respectively. 
The Dice performance of different methods on segmentation shows a highly significant impact (P<0.01). The results show that the proposed algorithm is more accurate than Chan-Vese level set method. In the third assessment, the Spearman's correlation coefficient between the segmented volumes and the corresponding ground truth volumes is ρ=0.929 (P=0.01). Conclusions In summary, the proposed method can batch process ABUS images, segment lesions, calculate their volumes and visualize lesions to facilitate observation by radiologists and physicians.
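Among the preprocessing steps listed above, anisotropic diffusion is the classical Perona-Malik scheme for suppressing speckle while preserving edges before level-set segmentation. A compact 2D sketch is given below; the iteration count, conductance parameter, and step size are illustrative values only.

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=20, kappa=30.0, step=0.15):
    """Perona-Malik anisotropic diffusion: diffusion is damped across
    strong gradients (edges) and encouraged in flat, noisy regions."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # edge-stopping conductance for each direction
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += step * (c(d_n) * d_n + c(d_s) * d_s + c(d_e) * d_e + c(d_w) * d_w)
    return img

noisy = np.random.rand(64, 64) * 255
print(anisotropic_diffusion(noisy).shape)
```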
Affiliation(s)
- Chia-Yen Lee
- Department of Electrical Engineering, National United University, Taipei, Taiwan
- Tzu-Fang Chang
- Department of Electrical Engineering, National United University, Taipei, Taiwan
- Yi-Hong Chou
- Department of Medical Imaging and Radiological Technology, Yuanpei University of Medical Technology, Hsinchu, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming University, Taipei, Taiwan
- Kuen-Cheh Yang
- Department of Family Medicine, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan
24. Tao C, Chen K, Han L, Peng Y, Li C, Hua Z, Lin J. New one-step model of breast tumor locating based on deep learning. J Xray Sci Technol 2019;27:839-856. PMID: 31306148. DOI: 10.3233/xst-190548.
Affiliation(s)
- Chao Tao
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Ke Chen
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Lin Han
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Yulan Peng
- Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Cheng Li
- China-Japan Friendship Hospital, Beijing, China
- Zhan Hua
- China-Japan Friendship Hospital, Beijing, China
- Jiangli Lin
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
| |
Collapse
|
25
|
|
26
|
Collini M, Radaelli F, Sironi L, Ceffa NG, D’Alfonso L, Bouzin M, Chirico G. Adaptive optics microspectrometer for cross-correlation measurement of microfluidic flows. JOURNAL OF BIOMEDICAL OPTICS 2019; 24:1-15. [PMID: 30816029 PMCID: PMC6987636 DOI: 10.1117/1.jbo.24.2.025004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Accepted: 12/04/2018] [Indexed: 05/17/2023]
Abstract
Mapping flows in vivo is essential for the investigation of cardiovascular pathologies in animal models. The limitation of optical methods, such as space-time cross-correlation, is the scattering of light by connective and fat tissue components and the direct wave front distortion by large inhomogeneities in the tissue. Nonlinear excitation of the sample fluorescence helps by reducing light scattering during excitation; however, the signal-to-background ratio is still limited by wave front distortion. We develop a diffractive optical microscope based on a single spatial light modulator (SLM) with no movable parts, combining the correction of wave front distortions with the cross-correlation analysis of the flow dynamics. We use the SLM to shine arbitrary patterns of spots on the sample, to correct their optical aberrations, to shift the aberration-corrected spot array on the sample for the collection of fluorescence images, and to measure flow velocities from the cross-correlation functions computed between pairs of spots. The setup and the algorithms are tested on various microfluidic devices. By applying the adaptive optics correction algorithm, it is possible to increase the signal-to-background ratio up to 5 times and to reduce the uncertainty of the flow speed measurement by approximately the same factor. By working on grids of spots, we can correct different aberrations in different portions of the field of view, a feature that allows for anisoplanatic aberration correction. Finally, because the excitation is more efficient, we can increase the accuracy of the speed measurement by employing a larger number of spots in the grid despite the fact that the two-photon excitation efficiency scales as the fourth power of this number: we achieve a twofold decrease in the uncertainty and a threefold increase in the accuracy of the evaluation of the flow speed.
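The flow-measurement principle in this abstract, namely recording fluorescence at two spots a known distance apart along the flow and reading the transit time off the peak of their cross-correlation, can be illustrated with a few lines of NumPy. The trace names, spot spacing, sampling interval, and synthetic burst below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def flow_speed_from_xcorr(trace_a, trace_b, spot_distance_um, dt_s):
    """Estimate flow speed from fluorescence time traces recorded at two
    spots separated by spot_distance_um along the flow direction."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    # full cross-correlation; the lag of the peak gives the transit time
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))
    lag = lags[np.argmax(xcorr)]
    if lag <= 0:
        raise ValueError("no positive transit lag found; check spot order")
    transit_time_s = lag * dt_s
    return spot_distance_um / transit_time_s      # um/s

# Synthetic example: a fluorescent burst passes spot A, then spot B 5 ms later.
rng = np.random.default_rng(0)
t = np.arange(2000)
burst = np.exp(-0.5 * ((t - 800) / 20.0) ** 2)
trace_a = burst + 0.05 * rng.standard_normal(t.size)
trace_b = np.roll(burst, 50) + 0.05 * rng.standard_normal(t.size)   # 50 samples later
speed = flow_speed_from_xcorr(trace_a, trace_b, spot_distance_um=10.0, dt_s=1e-4)
print(f"estimated speed: {speed:.1f} um/s")   # ~2000 um/s for a 5 ms transit over 10 um
```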
Collapse
Affiliation(s)
- Maddalena Collini
- University of Milano-Bicocca, Department of Physics, Milan, Italy
- University of Milano-Bicocca, Nanomedicine Center, Milan, Italy
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, Pozzuoli, Italy
| | | | - Laura Sironi
- University of Milano-Bicocca, Department of Physics, Milan, Italy
| | - Nicolo G. Ceffa
- University of Milano-Bicocca, Department of Physics, Milan, Italy
| | - Laura D’Alfonso
- University of Milano-Bicocca, Department of Physics, Milan, Italy
| | - Margaux Bouzin
- University of Milano-Bicocca, Department of Physics, Milan, Italy
| | - Giuseppe Chirico
- University of Milano-Bicocca, Department of Physics, Milan, Italy
- University of Milano-Bicocca, Nanomedicine Center, Milan, Italy
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, Pozzuoli, Italy
- Address all correspondence to Giuseppe Chirico, E-mail:
| |
Collapse
|
27
|
Chiang TC, Huang YS, Chen RT, Huang CS, Chang RF. Tumor Detection in Automated Breast Ultrasound Using 3-D CNN and Prioritized Candidate Aggregation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:240-249. [PMID: 30059297 DOI: 10.1109/tmi.2018.2860257] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Automated whole breast ultrasound (ABUS) has been widely used as a screening modality for the examination of breast abnormalities. Reviewing the hundreds of slices produced by ABUS, however, is time consuming. Therefore, in this paper, a fast and effective computer-aided detection system based on 3-D convolutional neural networks (CNNs) and prioritized candidate aggregation is proposed to accelerate this review. First, an efficient sliding window method is used to extract volumes of interest (VOIs). Then, the tumor probability of each VOI is estimated with a 3-D CNN, and VOIs with higher estimated probability are selected as tumor candidates. Since the candidates may overlap each other, a novel scheme is designed to aggregate the overlapped candidates. During the aggregation, candidates are prioritized based on estimated tumor probability to alleviate the over-aggregation issue. The relationship between the sizes of the VOI and the target tumor is optimally exploited to effectively perform each stage of the detection algorithm. On evaluation with a test set of 171 tumors, our method achieved sensitivities of 95% (162/171), 90% (154/171), 85% (145/171), and 80% (137/171) with 14.03, 6.92, 4.91, and 3.62 false positives per patient (with six passes), respectively. In summary, our method is more general and much faster than prior works and demonstrates promising results.
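The prioritized aggregation step can be pictured as a greedy merge over candidates sorted by estimated tumor probability: higher-probability candidates are kept first and absorb any lower-probability candidate that overlaps them. The sketch below is only a minimal stand-in for the paper's aggregation scheme; the IoU criterion, overlap threshold, and box format are assumptions.

```python
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3-D boxes (z0,y0,x0,z1,y1,x1)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-9)

def aggregate_candidates(boxes, probs, overlap_thr=0.25):
    """Greedy, probability-prioritized aggregation of overlapping VOI candidates."""
    order = np.argsort(probs)[::-1]               # visit candidates by descending probability
    kept_boxes, kept_probs = [], []
    for i in order:
        merged = False
        for j, kb in enumerate(kept_boxes):
            if iou_3d(boxes[i], kb) > overlap_thr:
                # enlarge the accepted box to cover the absorbed candidate
                kept_boxes[j] = np.concatenate([np.minimum(kb[:3], boxes[i][:3]),
                                                np.maximum(kb[3:], boxes[i][3:])])
                merged = True
                break
        if not merged:
            kept_boxes.append(boxes[i].astype(float))
            kept_probs.append(float(probs[i]))
    return np.array(kept_boxes), np.array(kept_probs)

# Toy usage: two overlapping candidates collapse into one detection, one stays separate.
boxes = np.array([[10, 10, 10, 30, 30, 30],
                  [12, 12, 12, 32, 32, 32],
                  [60, 60, 60, 80, 80, 80]], dtype=float)
probs = np.array([0.9, 0.7, 0.4])
print(aggregate_candidates(boxes, probs))
```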
Collapse
|
28
|
Lei B, Huang S, Li R, Bian C, Li H, Chou YH, Cheng JZ. Segmentation of breast anatomy for automated whole breast ultrasound images with boundary regularized convolutional encoder–decoder network. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.09.043] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
29
|
Rella R, Belli P, Giuliani M, Bufi E, Carlino G, Rinaldi P, Manfredi R. Automated Breast Ultrasonography (ABUS) in the Screening and Diagnostic Setting: Indications and Practical Use. Acad Radiol 2018; 25:1457-1470. [PMID: 29555568 DOI: 10.1016/j.acra.2018.02.014] [Citation(s) in RCA: 61] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2018] [Revised: 02/10/2018] [Accepted: 02/11/2018] [Indexed: 10/17/2022]
Abstract
Automated breast ultrasonography (ABUS) is a new imaging technology for automatic breast scanning with ultrasound. It was first developed to overcome the limitations of operator dependency and the lack of standardization and reproducibility of handheld ultrasound. ABUS provides a three-dimensional representation of breast tissue and allows image reformatting in three planes; the generated coronal plane has been suggested to improve diagnostic accuracy. The technique was first used in the screening setting to improve breast cancer detection, especially in mammographically dense breasts. In recent years, numerous studies have also evaluated its use in the diagnostic setting: they showed its suitability for breast cancer staging, evaluation of tumor response to neoadjuvant chemotherapy, and second-look ultrasound after magnetic resonance imaging. The purpose of this article is to provide a comprehensive review of the current body of literature on the clinical performance of ABUS, summarize the available evidence, and identify gaps in knowledge for future research.
Collapse
|
30
|
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890 DOI: 10.1016/j.compbiomed.2017.11.018] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 11/30/2017] [Accepted: 11/30/2017] [Indexed: 12/14/2022]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Insight into the localization and segmentation of tissues is then provided, both for the case in which the organ/tissue localization provides the final segmentation and for the case in which a two-step segmentation process is needed because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of some of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the employment of higher-frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data, are discussed.
Collapse
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
| | - U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
| | - Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy.
| |
Collapse
|
31
|
Gui L, Yang X. Automatic renal lesion segmentation in ultrasound images based on saliency features, improved LBP, and an edge indicator under level set framework. Med Phys 2017; 45:223-235. [PMID: 29131363 DOI: 10.1002/mp.12661] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Revised: 10/09/2017] [Accepted: 10/11/2017] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Segmentation of lesions in ultrasound images is widely used for preliminary diagnosis. In this paper, we develop an automatic segmentation algorithm for multiple types of lesions in ultrasound images. The proposed method is able to detect and segment lesions automatically and to generate accurate segmentation results for lesion regions. METHODS In the detection step, two saliency detection frameworks that adopt global image information are designed to capture the differences between normal and abnormal organs as well as those between lesions and the surrounding normal tissues. In the segmentation step, three types of local information, i.e., image intensity, improved local binary pattern (LBP) features, and an edge indicator, are embedded into a modified level set framework to carry out the segmentation task. RESULTS The cyst and carcinoma regions in ultrasound images of human kidneys can be automatically detected and segmented by the proposed method. The efficiency and accuracy of the method are validated by quantitative evaluations and comparisons with three well-recognized segmentation methods. Specifically, the average precision and Dice coefficient of the proposed method in segmenting renal cysts are 95.33% and 90.16%, respectively, while those in segmenting renal carcinomas are 94.22% and 91.13%, respectively. The average precision and Dice coefficient of the proposed method are higher than those of the three compared segmentation methods. CONCLUSIONS The proposed method can efficiently detect and segment renal lesions in ultrasound images. In addition, since it exploits the differences between normal and abnormal organs as well as those between lesions and the surrounding normal tissues, it can potentially be extended to lesions in other organs on ultrasound as well as to lesions in medical images of other modalities.
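To make the local cues concrete, the sketch below computes a plain uniform LBP texture map (standing in for the paper's improved LBP) and a standard edge indicator g = 1/(1 + |grad(G_sigma * I)|^2) that an edge-based level set could use, with scikit-image; the input file and parameter values are assumptions.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.feature import local_binary_pattern
from skimage.filters import gaussian, sobel

# Hypothetical ultrasound image of a kidney; plain uniform LBP stands in for
# the paper's "improved LBP" descriptor.
img = img_as_float(io.imread("renal_us.png", as_gray=True))

# Local texture: uniform LBP with 8 neighbours on a radius-1 circle.
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")

# Edge indicator used by edge-based level sets: g = 1 / (1 + |grad(G_sigma * I)|^2),
# close to 0 on boundaries and close to 1 in homogeneous tissue.
grad_mag = sobel(gaussian(img, sigma=1.5))
edge_indicator = 1.0 / (1.0 + grad_mag ** 2)

# Per-pixel feature stack (intensity, texture, edge cue) that a level-set or
# clustering step could consume.
features = np.stack([img, lbp / lbp.max(), edge_indicator], axis=-1)
print(features.shape)
```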
Collapse
Affiliation(s)
- Luying Gui
- Nanjing University of Science and Technology, Nanjing, Jiangsu, 210094, China
| | | |
Collapse
|
32
|
Xi X, Xu H, Shi H, Zhang C, Ding HY, Zhang G, Tang Y, Yin Y. Robust texture analysis of multi-modal images using Local Structure Preserving Ranklet and multi-task learning for breast tumor diagnosis. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.082] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
|
33
|
Kozegar E, Soryani M, Behnam H, Salamati M, Tan T. Breast cancer detection in automated 3D breast ultrasound using iso-contours and cascaded RUSBoosts. ULTRASONICS 2017; 79:68-80. [PMID: 28448836 DOI: 10.1016/j.ultras.2017.04.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Revised: 03/21/2017] [Accepted: 04/18/2017] [Indexed: 06/07/2023]
Abstract
Automated 3D breast ultrasound (ABUS) is a popular new modality used as an adjunct to mammography for detecting cancers in women with dense breasts. In this paper, a multi-stage computer-aided detection system is proposed to detect cancers in ABUS images. In the first step, an efficient despeckling method called OBNLM is applied to the images to reduce speckle noise. Afterwards, a new algorithm based on isocontours is applied to detect initial candidates, since the boundary of masses is hypoechoic. To reduce falsely generated isocontours, features such as hypoechoicity, roundness, area and contour strength are used. The resulting candidates are then further processed by a cascade classifier whose base classifiers are Random Under-Sampling Boosting (RUSBoost) classifiers, which are designed to deal with imbalanced datasets. Each base classifier is trained on a group of features such as Gabor, LBP, and GLCM features. The performance of the proposed system was evaluated using 104 volumes from 74 patients, including 112 malignant lesions. According to Free Response Operating Characteristic (FROC) analysis, the proposed system achieved region-based and case-based sensitivities of 68% and 76%, respectively, at one false positive per image.
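A cascade of RUSBoost base classifiers can be put together directly with imbalanced-learn. The sketch below trains one stage per feature group and lets a candidate survive only if every stage scores it above a threshold; the random feature groups, the pass threshold, and the stage count are assumptions rather than the paper's configuration.

```python
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

def train_cascade(feature_groups, y, n_estimators=100, pass_thr=0.3):
    """Train one RUSBoost stage per feature group; only candidates scoring
    above pass_thr at a stage are used to train the next stage."""
    stages = []
    keep = np.ones(len(y), dtype=bool)
    for X in feature_groups:                       # e.g. [gabor, lbp, glcm] feature blocks
        if np.unique(y[keep]).size < 2:            # nothing left to refine
            break
        clf = RUSBoostClassifier(n_estimators=n_estimators, random_state=0)
        clf.fit(X[keep], y[keep])
        stages.append(clf)
        keep &= clf.predict_proba(X)[:, 1] > pass_thr
    return stages

def cascade_predict(stages, feature_groups, pass_thr=0.3):
    keep = np.ones(feature_groups[0].shape[0], dtype=bool)
    for clf, X in zip(stages, feature_groups):
        keep &= clf.predict_proba(X)[:, 1] > pass_thr
    return keep                                    # True = candidate survives all stages

# Toy usage with random features standing in for Gabor / LBP / GLCM descriptors.
rng = np.random.default_rng(0)
y = (rng.random(500) < 0.1).astype(int)            # heavily imbalanced labels
groups = [rng.standard_normal((500, 20)) + 0.8 * y[:, None] for _ in range(3)]
stages = train_cascade(groups, y)
print(cascade_predict(stages, groups).sum(), "candidates survive the cascade")
```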
Collapse
Affiliation(s)
- Ehsan Kozegar
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Mohsen Soryani
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Hamid Behnam
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Masoumeh Salamati
- Department of Reproductive Imaging, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
| | - Tao Tan
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands.
| |
Collapse
|
34
|
Wang X, Guo Y, Wang Y, Yu J. Automatic breast tumor detection in ABVS images based on convolutional neural network and superpixel patterns. Neural Comput Appl 2017. [DOI: 10.1007/s00521-017-3138-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
35
|
Yu Y, Wang J. Enclosure Transform for Interest Point Detection From Speckle Imagery. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:769-780. [PMID: 28114011 DOI: 10.1109/tmi.2016.2636281] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We present a fast enclosure transform (ET) to localize complex objects of interest from speckle imagery. This approach explores the spatial confinement on regional features from a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed, consisting of the enclosure force, potential energy, and encloser count. In the transform domain, the local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, being on the order of O(MN) using N measuring distances across an image of M ridge pixels. It involves easy and few parameter settings. We demonstrate and assess the performance of ET on the automatic detection of the prostate locations from supra-pubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy and coverage.
Collapse
|
36
|
Jalalian A, Mashohor S, Mahmud R, Karasfi B, Saripan MIB, Ramli ARB. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection. EXCLI JOURNAL 2017; 16:113-137. [PMID: 28435432 PMCID: PMC5379115 DOI: 10.17179/excli2016-701] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2016] [Accepted: 01/05/2017] [Indexed: 12/15/2022]
Abstract
Breast cancer is the most prevalent cancer affecting women all over the world. Early detection and treatment of breast cancer could reduce the mortality rate. Issues such as technical factors related to imaging quality, as well as human error, increase the misdiagnosis of breast cancer by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these restrictions and have been studied for breast cancer detection in many imaging modalities in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. They act only as a second reader; the final decisions are still made by the radiologist. In this study, recent CAD systems for breast cancer detection on different modalities such as mammography, ultrasound, MRI, and biopsy histopathological images are introduced. The foundation of CAD systems generally consists of four stages: Pre-processing, Segmentation, Feature extraction, and Classification. The approaches applied to design the different stages of a CAD system are summarised, and the advantages and disadvantages of different segmentation, feature extraction and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to address these issues are discussed. Finally, performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed.
Collapse
Affiliation(s)
- Afsaneh Jalalian
- Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra, Malaysia
| | - Syamsiah Mashohor
- Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra, Malaysia
| | - Rozi Mahmud
- Department of Imaging, Faculty of Medicine and Health Science, Universiti Putra, Malaysia
| | - Babak Karasfi
- Department of Computer Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
| | - M. Iqbal B. Saripan
- Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra, Malaysia
| | - Abdul Rahman B. Ramli
- Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra, Malaysia
| |
Collapse
|
37
|
Meel-van den Abeelen ASS, Weijers G, van Zelst JCM, Thijssen JM, Mann RM, de Korte CL. 3D quantitative breast ultrasound analysis for differentiating fibroadenomas and carcinomas smaller than 1cm. Eur J Radiol 2017; 88:141-147. [PMID: 28189199 DOI: 10.1016/j.ejrad.2017.01.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 09/02/2016] [Accepted: 01/05/2017] [Indexed: 11/15/2022]
Abstract
PURPOSE In (3D) ultrasound, accurate discrimination of small solid masses is difficult, resulting in a high frequency of biopsies for benign lesions. In this study, we investigate whether 3D quantitative breast ultrasound (3DQBUS) analysis can improve non-invasive discrimination between benign and malignant lesions. METHODS AND MATERIALS 3D US studies of 112 biopsied solid breast lesions (size <1 cm) were included (34 fibroadenomas and 78 invasive ductal carcinomas). The lesions were manually delineated and, based on the sonographic criteria used by radiologists, 3 regions of interest were defined in 3D for analysis: ROI (ellipsoid covering the inside of the lesion), PER (peritumoural surrounding: 0.5 mm around the lesion), and POS (posterior-tumoural acoustic phenomena: region below the lesion with the same size as delineated for the lesion). After automatic gain correction (AGC), the mean and standard deviation of the echo level within the regions were calculated. For the ROI and POS, the residual attenuation coefficient was also estimated in decibels per centimeter (dB/cm). The resulting eight features were used for classification of the lesions by logistic regression analysis. The classification accuracy was evaluated by leave-one-out cross-validation, and receiver operating characteristic (ROC) curves were constructed to assess the performance of the classification. All lesions were delineated by two readers and the results were compared to assess the effect of the manual delineation. RESULTS The area under the ROC curve was 0.86 for both readers. At 100% sensitivity, a specificity of 26% and 50% was achieved for readers 1 and 2, respectively. Inter-reader variability in lesion delineation was marginal and did not affect the accuracy of the technique. An area under the ROC curve of 0.86 was reached for the second reader when the results of the first reader were used as the training set, yielding a sensitivity of 100% and a specificity of 40%. Consequently, 3DQBUS would have achieved a 40% reduction in biopsies for benign lesions for reader 2, without a decrease in sensitivity. CONCLUSION This study shows that 3DQBUS is a promising technique to classify suspicious breast lesions as benign, potentially preventing unnecessary biopsies.
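The classification and validation protocol described here (eight features, logistic regression, leave-one-out cross-validation, ROC analysis) maps directly onto scikit-learn. The sketch below uses random numbers in place of the real echo-level and attenuation features; the class sizes mirror the 34 fibroadenomas and 78 carcinomas mentioned above, but everything else is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

# X: (n_lesions, 8) matrix of echo-level / attenuation features,
# y: 0 = fibroadenoma, 1 = carcinoma. Random data stands in for the real features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (34, 8)), rng.normal(0.8, 1.0, (78, 8))])
y = np.r_[np.zeros(34, dtype=int), np.ones(78, dtype=int)]

clf = LogisticRegression(max_iter=1000)
# Leave-one-out cross-validated probability of malignancy for every lesion.
probs = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]

auc = roc_auc_score(y, probs)
fpr, tpr, thr = roc_curve(y, probs)
# Specificity achievable while keeping sensitivity at 100%.
spec_at_full_sens = (1 - fpr[tpr >= 1.0]).max()
print(f"LOO AUC = {auc:.2f}, specificity at 100% sensitivity = {spec_at_full_sens:.2f}")
```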
Collapse
Affiliation(s)
- A S S Meel-van den Abeelen
- Department of Biomechanical Engineering, MIRA-Institute, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands; Medical UltraSound Imaging Center (MUSIC), department of Radiology and Nuclear Medicine, Radboud University Medical Center, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands.
| | - G Weijers
- Medical UltraSound Imaging Center (MUSIC), department of Radiology and Nuclear Medicine, Radboud University Medical Center, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands
| | - J C M van Zelst
- Radboud University Nijmegen Medical Centre, Department of Radiology and Nuclear Medicine, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - J M Thijssen
- Medical UltraSound Imaging Center (MUSIC), department of Radiology and Nuclear Medicine, Radboud University Medical Center, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands
| | - R M Mann
- Radboud University Nijmegen Medical Centre, Department of Radiology and Nuclear Medicine, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - C L de Korte
- Medical UltraSound Imaging Center (MUSIC), department of Radiology and Nuclear Medicine, Radboud University Medical Center, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands
| |
Collapse
|
38
|
Srivastava R, Duan L, Wong DWK, Liu J, Wong TY. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 138:83-91. [PMID: 27886718 DOI: 10.1016/j.cmpb.2016.10.017] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2016] [Revised: 09/05/2016] [Accepted: 10/18/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND AND OBJECTIVES Diabetic Retinopathy is the leading cause of blindness in developed countries in the age group 20-74 years. It is characterized by lesions on the retina, and this paper focuses on detecting two of these lesions, Microaneurysms and Hemorrhages, which are also known as red lesions. The paper addresses two problems in detecting red lesions from retinal fundus images: (1) false detections on blood vessels; and (2) the different sizes of red lesions. METHODS To deal with false detections on blood vessels, novel filters are proposed that can distinguish between red lesions and blood vessels. This distinction is based on the fact that vessels are elongated while red lesions are usually circular blob-like structures. The second problem, the different sizes of lesions, is dealt with by applying the proposed filters on patches of different sizes instead of filtering the full image. These patches are obtained by dividing the original image using a grid whose size determines the patch size. Different grid sizes were used, and the lesion detection results for these grid sizes were combined using Multiple Kernel Learning. RESULTS Experiments on a dataset of 143 images showed that the proposed filters detected Microaneurysms and Hemorrhages successfully even when these lesions were close to blood vessels. In addition, using Multiple Kernel Learning improved the results compared to using a grid of one size only. The areas under the receiver operating characteristic curve were 0.97 and 0.92 for Microaneurysms and Hemorrhages, respectively, which are better than those of existing related works. CONCLUSIONS The proposed filters are robust to the presence of blood vessels and surpass related works in detecting red lesions from retinal fundus images. Improved lesion detection using the proposed approach can help in the automatic detection of Diabetic Retinopathy.
Collapse
Affiliation(s)
| | - Lixin Duan
- Institute for Infocomm Research, Singapore 138632
| | | | - Jiang Liu
- Institute for Infocomm Research, Singapore 138632
| | | |
Collapse
|
39
|
Zhang M, Wu T, Beeman SC, Cullen-McEwen L, Bertram JF, Charlton JR, Baldelomar E, Bennett KM. Efficient Small Blob Detection Based on Local Convexity, Intensity and Shape Information. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1127-1137. [PMID: 26685229 PMCID: PMC6991892 DOI: 10.1109/tmi.2015.2509463] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The identification of small structures (blobs) from medical images to quantify clinically relevant features, such as size and shape, is important in many medical applications. One particular application explored here is the automated detection of kidney glomeruli after targeted contrast enhancement and magnetic resonance imaging. We propose a computationally efficient algorithm, termed the Hessian-based Difference of Gaussians (HDoG), to segment small blobs (e.g., glomeruli in the kidney) from 3D medical images based on local convexity, intensity and shape information. The image is first smoothed and pre-segmented into small blob candidate regions based on local convexity. Two novel 3D regional features (regional blobness and regional flatness) are then extracted from the candidate regions. Together with regional intensity, the three features are used in an unsupervised learning algorithm for automatic post-pruning. HDoG is first validated in a 2D form and compared with three other blob detectors from the literature, which are generally designed for 2D images only. To test the detectability of blobs in 3D images, 240 sets of simulated images are rendered for scenarios mimicking the renal nephron distribution observed in contrast-enhanced 3D MRI. The results show a satisfactory performance of HDoG in detecting large numbers of small blobs. Two sets of real kidney 3D MR images (6 rats, 3 humans) are then used to validate the applicability of HDoG for glomeruli detection. By comparing MRI to stereological measurements, we verify that HDoG is a robust and efficient unsupervised technique for 3D blob segmentation.
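A minimal 2-D illustration of the two ingredients named above, a Difference-of-Gaussians response and Hessian-based convexity, can be written with scikit-image; the regional-feature extraction and unsupervised post-pruning of the actual HDoG pipeline are not reproduced, and the file name, scales, and thresholds are assumptions.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.filters import difference_of_gaussians
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

# Hypothetical 2-D grey-level image containing small bright blobs.
img = img_as_float(io.imread("blobs.png", as_gray=True))

# 1) Difference of Gaussians tuned to the expected blob radius.
dog = difference_of_gaussians(img, low_sigma=2, high_sigma=4)

# 2) Hessian eigenvalues of the smoothed response; for a bright convex blob
#    both eigenvalues are negative, so -(l1 + l2) acts as a blobness score.
H = hessian_matrix(dog, sigma=2, order="rc")
l1, l2 = hessian_matrix_eigvals(H)
blobness = np.where((l1 < 0) & (l2 < 0), -(l1 + l2), 0.0)

# 3) Candidate blob mask: strong DoG response AND local convexity.
candidates = (dog > dog.mean() + 2 * dog.std()) & (blobness > 0)
print(candidates.sum(), "candidate blob pixels")
```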
Collapse
|
40
|
Liang X, Lin L, Cao Q, Huang R, Wang Y. Recognizing Focal Liver Lesions in CEUS With Dynamically Trained Latent Structured Models. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:713-27. [PMID: 26513779 DOI: 10.1109/tmi.2015.2492618] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This work investigates how to automatically classify Focal Liver Lesions (FLLs) into three specific benign or malignant types in Contrast-Enhanced Ultrasound (CEUS) videos, and aims to provide a computational framework to assist clinicians in FLL diagnosis. The main challenge of this task is that FLLs in CEUS videos often show diverse enhancement patterns at different temporal phases. To handle these diverse patterns, we propose a novel structured model, which detects a number of discriminative Regions of Interest (ROIs) for the FLL and recognizes the FLL based on these ROIs. Our model incorporates an ensemble of local classifiers in an attempt to identify different enhancement patterns of ROIs, and in particular, we make the model reconfigurable by introducing switch variables to adaptively select appropriate classifiers during inference. We formulate the model learning as a non-convex optimization problem and present a principled optimization method to solve it in a dynamic manner: the latent structures (e.g., the selections of local classifiers and the sizes and locations of ROIs) are iteratively determined along with the parameter learning. Given the updated model parameters in each step, a data-driven inference is also proposed to efficiently determine the latent structures by using sequential pruning and dynamic programming. In the experiments, we demonstrate superior performance over state-of-the-art approaches. We also release hundreds of CEUS FLL videos used to quantitatively evaluate this work, which to the best of our knowledge forms the largest dataset in the literature. Please find more information at "http://vision.sysu.edu.cn/projects/fllrecog/".
Collapse
|
41
|
Deng Y, Liu W, Jago J. A hierarchical model for automated breast lesion detection from ultrasound 3D data. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2015:145-8. [PMID: 26736221 DOI: 10.1109/embc.2015.7318321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Ultrasound imaging plays an important role in breast cancer screening, for which early and accurate lesion detection is crucial in clinical practice. Much research has been performed to support breast lesion detection based on ultrasound data. In this paper, a novel hierarchical model is proposed to automatically detect breast lesions from 3D ultrasound data. The model simultaneously considers image information from low level to high level and combines it for detection through a joint probability. For each layer of the model, a corresponding algorithm is applied to represent the image information at that level. A dynamic programming approach is applied to efficiently obtain the optimal solution. On a preliminary dataset, the proposed model demonstrated superior performance for automated breast lesion detection, with 0.375 false positives per case at 91.7% sensitivity.
Collapse
|
42
|
Song J, Yang C, Fan L, Wang K, Yang F, Liu S, Tian J. Lung Lesion Extraction Using a Toboggan Based Growing Automatic Segmentation Approach. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:337-353. [PMID: 26336121 DOI: 10.1109/tmi.2015.2474119] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, it is challenging to achieve fully automatic lesion detection and segmentation with acceptable accuracy due to the heterogeneity of lung lesions. Here, we propose a novel toboggan based growing automatic segmentation approach (TBGA) with a three-step framework consisting of automatic initial seed point selection, multi-constraint 3D lesion extraction, and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it can provide a high lesion detection sensitivity (96.35%) and a segmentation accuracy comparable to manual segmentation (P > 0.05), as demonstrated by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with the commonly used level set and skeleton graph cut methods; the results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for one lesion segmentation was under 8 s using our new method. In conclusion, we believe that the novel TBGA can achieve robust, efficient and accurate lung lesion segmentation in CT images automatically.
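TBGA's toboggan-based seeding and multi-constraint extraction are more involved than can be shown here, but the core growing step can be illustrated with a simple intensity-constrained 3-D region grower; the seed position, tolerance, and toy volume below are assumptions, not the paper's constraints.

```python
from collections import deque
import numpy as np

def grow_region_3d(volume, seed, tol=150.0):
    """Grow a 3-D region from `seed` (z, y, x), accepting 6-connected voxels
    whose intensity stays within `tol` of the running region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(volume[seed]), 1
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                continue
            if mask[nz, ny, nx]:
                continue
            if abs(volume[nz, ny, nx] - region_sum / region_n) <= tol:
                mask[nz, ny, nx] = True
                region_sum += float(volume[nz, ny, nx])
                region_n += 1
                queue.append((nz, ny, nx))
    return mask

# Toy CT-like volume: a spherical "lesion" of -200 HU on a -800 HU lung-like background.
vol = np.full((64, 64, 64), -800.0)
zz, yy, xx = np.mgrid[:64, :64, :64]
vol[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2] = -200.0
lesion = grow_region_3d(vol, seed=(32, 32, 32), tol=150.0)
print(lesion.sum(), "voxels in the grown lesion")
```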
Collapse
|
43
|
Zhang M, Wu T, Bennett KM. Small blob identification in medical images using regional features from optimum scale. IEEE Trans Biomed Eng 2015; 62:1051-62. [PMID: 25265624 DOI: 10.1109/tbme.2014.2360154] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this research, we are interested in one type of imaging object: small blobs. Examples of small blob objects are cells in histopathology images, glomeruli in MR images, etc. This problem is particularly challenging because small blobs often have inhomogeneous intensity distributions and indistinct boundaries against the background; yet, in general, these blobs have similar sizes. Motivated by this finding, we propose a novel detector termed the Hessian-based Laplacian of Gaussian (HLoG), using scale space theory as its foundation. As in most imaging detectors, the image is first smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale on which a pre-segmentation is conducted. The advantage of the Hessian process is that it is capable of delineating the blobs; as a result, regional features can be retrieved. These features enable an unsupervised clustering algorithm for post-pruning, which should be more robust and sensitive than the traditional threshold-based post-pruning commonly used in most imaging detectors. To test the performance of the proposed HLoG, two sets of 2-D grey-level medical images are studied. HLoG is compared against three state-of-the-art detectors, generalized LoG, Radial-Symmetry and LoG, using precision, recall, and F-score metrics. We observe that HLoG statistically outperforms the compared detectors.
Collapse
|
44
|
Tan T, Mordang JJ, van Zelst J, Grivegnée A, Gubern-Mérida A, Melendez J, Mann RM, Zhang W, Platel B, Karssemeijer N. Computer-aided detection of breast cancers using Haar-like features in automated 3D breast ultrasound. Med Phys 2015; 42:1498-504. [DOI: 10.1118/1.4914162] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
|
45
|
Wu J, Wang Y, Yu J, Shi X, Zhang J, Chen Y, Pang Y. Intelligent speckle reducing anisotropic diffusion algorithm for automated 3-D ultrasound images. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2015; 32:248-257. [PMID: 26366596 DOI: 10.1364/josaa.32.000248] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
A novel 3-D filtering method is presented for speckle reduction and detail preservation in automated 3-D ultrasound images. First, the texture features of an image are analyzed using an improved quadtree (QT) decomposition. Then, the optimal homogeneous and the obviously heterogeneous regions are selected from the QT decomposition results. Finally, the diffusion parameters and the diffusion process are automatically determined based on the properties of these two selected regions. The computing time needed for 2-D speckle reduction is very short; however, the computing time required for 3-D speckle reduction is often hundreds of times longer than that for 2-D speckle reduction, which may limit its application in practice. Because the new filter can adaptively adjust the time step of the iteration, the computation time is reduced effectively. Both synthetic and real 3-D ultrasound images are used to evaluate the proposed filter. It is shown that this filter is superior to other methods in both practicality and efficiency.
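For reference, the classic Perona-Malik scheme that underlies speckle-reducing anisotropic diffusion looks as follows in 2-D; the quadtree-guided region selection and automatic time-step adaptation that this paper adds are not reproduced, and kappa, dt, and the iteration count are assumptions.

```python
import numpy as np

def perona_malik_2d(img, n_iter=30, kappa=0.2, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion on a 2-D image in [0, 1].
    Edges (large gradients) get a small conduction coefficient and are
    preserved, while noise in homogeneous regions is smoothed away."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # intensity differences toward the four nearest neighbours
        # (np.roll gives periodic boundaries, which is fine for a sketch)
        d_n = np.roll(u, -1, axis=0) - u
        d_s = np.roll(u,  1, axis=0) - u
        d_e = np.roll(u, -1, axis=1) - u
        d_w = np.roll(u,  1, axis=1) - u
        # exponential conduction function: ~1 in flat regions, ~0 across edges
        c_n = np.exp(-(d_n / kappa) ** 2)
        c_s = np.exp(-(d_s / kappa) ** 2)
        c_e = np.exp(-(d_e / kappa) ** 2)
        c_w = np.exp(-(d_w / kappa) ** 2)
        # explicit update; dt <= 0.25 keeps the 2-D scheme stable
        u += dt * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return u

# Toy usage: a noisy step edge is smoothed without blurring the edge itself.
rng = np.random.default_rng(0)
img = np.zeros((128, 128)); img[:, 64:] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
denoised = perona_malik_2d(noisy)
print(float(np.abs(denoised - img).mean()))
```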
Collapse
|
46
|
Ye C, Vaidya V, Zhao F. Improved mass detection in 3D automated breast ultrasound using region based features and multi-view information. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:2865-8. [PMID: 25570589 DOI: 10.1109/embc.2014.6944221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Breast cancer is one of the leading causes of cancer death for women. Early detection of breast cancer is crucial for reducing mortality rates and improving the prognosis of patients. Recently, 3D automated breast ultrasound (ABUS) has gained increasing attention for reducing subjectivity and operator dependence and for providing 3D context of the whole breast. In this work, we propose a breast mass detection algorithm that improves voxel-based detection results by incorporating 3D region-based features and multi-view information in 3D ABUS images. Based on the candidate mass regions produced by the voxel-based method, the proposed approach further improves the detection results with three major steps: 1) 3D mass segmentation in a geodesic active contour framework with edge points obtained from directional searching; 2) region-based single-view and multi-view feature extraction; and 3) support vector machine (SVM) classification to discriminate candidate regions as breast masses or normal background tissue. Twenty-two patients, comprising 51 3D ABUS volumes with 44 breast masses, were used for evaluation. The proposed approach reached sensitivities of 95%, 90%, and 70% with an average of 4.3, 3.8, and 1.6 false positives per volume, respectively. The results also indicate that multi-view information plays an important role in false positive reduction in 3D breast mass detection.
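Step 3 of this pipeline, SVM classification of region-based single-view and multi-view features, is straightforward to express with scikit-learn. The feature arrays below are random placeholders, so the reported AUC will hover around 0.5 and only sanity-checks the plumbing; the feature dimensions and kernel settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder region features: columns could hold shape, intensity and texture
# statistics from the segmented candidate plus the corresponding statistics
# from the other ABUS views of the same breast.
rng = np.random.default_rng(0)
single_view = rng.standard_normal((200, 12))
multi_view = rng.standard_normal((200, 12))
y = (rng.random(200) < 0.25).astype(int)          # 1 = true mass, 0 = background tissue

X = np.hstack([single_view, multi_view])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))

# 5-fold cross-validated AUC as a quick check of the feature set.
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean())
```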
Collapse
|
47
|
An Approach to a Laser-Touchscreen System. ENTERP INF SYST-UK 2015. [DOI: 10.1007/978-3-319-29133-8_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
48
|
Drukker K, Sennett CA, Giger ML. Computerized detection of breast cancer on automated breast ultrasound imaging of women with dense breasts. Med Phys 2014; 41:012901. [PMID: 24387528 DOI: 10.1118/1.4837196] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
PURPOSE To develop a computer-aided detection method and investigate its feasibility for the detection of breast cancer in automated 3D ultrasound images of women with dense breasts. METHODS The HIPAA-compliant study involved a dataset of volumetric ultrasound image data, "views," acquired with an automated U-Systems Somo●V(®) ABUS system for 185 asymptomatic women with dense breasts (BI-RADS Composition/Density 3 or 4). For each patient, three whole-breast views (3D image volumes) per breast were acquired. A total of 52 patients had breast cancer (61 cancers), diagnosed through any follow-up at most 365 days after the original screening mammogram. Thirty-one of these patients (32 cancers) had a screening mammogram with a clinically assigned BI-RADS Assessment Category 1 or 2, i.e., were mammographically negative. All software used for analysis was developed in-house and involved 3 steps: (1) detection of initial tumor candidates, (2) characterization of candidates, and (3) elimination of false-positive candidates. Performance was assessed by calculating the cancer detection sensitivity as a function of the number of "marks" (detections) per view. RESULTS At a single mark per view, i.e., six marks per patient, the median detection sensitivity by cancer was 50.0% (16/32) ± 6% for patients with a screening mammogram-assigned BI-RADS category 1 or 2, similar to radiologists' performance sensitivity (49.9%) for this dataset in a prior reader study, and 45.9% (28/61) ± 4% for all patients. CONCLUSIONS Promising detection sensitivity was obtained for the computer on a 3D ultrasound dataset of women with dense breasts at a rate of false-positive detections that may be acceptable for clinical implementation.
Collapse
Affiliation(s)
- Karen Drukker
- Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637
| | - Charlene A Sennett
- Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637
| | - Maryellen L Giger
- Department of Radiology, MC2026, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637
| |
Collapse
|
49
|
Lo CM, Chen RT, Chang YC, Yang YW, Hung MJ, Huang CS, Chang RF. Multi-dimensional tumor detection in automated whole breast ultrasound using topographic watershed. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1503-1511. [PMID: 24718570 DOI: 10.1109/tmi.2014.2315206] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Automated whole breast ultrasound (ABUS) is becoming a popular screening modality for whole breast examination. Compared to conventional handheld ultrasound, ABUS is operator-independent and feasible for mass screening. However, reviewing hundreds of slices in an ABUS image volume is time-consuming. A computer-aided detection (CADe) system based on the watershed transform was proposed in this study to accelerate this review. The watershed transform was applied to gather similar tissues around local minima into homogeneous regions. The likelihood of each region being a tumor was estimated using quantitative morphology, intensity, and texture features in the 2-D/3-D false positive reduction (FPR). The collected database comprised 68 benign and 65 malignant tumors. As a result, the proposed system achieved sensitivities of 100% (133/133), 90% (121/133), and 80% (107/133) with FPs/pass of 9.44, 5.42, and 3.33, respectively. The figure of merit of the combination of the three feature sets is 0.46, which is significantly better than that of the other feature sets ([Formula: see text]). In summary, the proposed CADe system, based on multi-dimensional FPR using the integrated feature set, is promising for detecting tumors in ABUS images.
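The candidate-generation idea, watershed regions grown around local minima followed by per-region statistics for false-positive reduction, can be sketched with scikit-image on a single slice; the input file, the minimum-filter window, and the region-filtering thresholds are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, img_as_float
from skimage.filters import gaussian
from skimage.measure import regionprops
from skimage.segmentation import watershed

# Hypothetical ABUS slice; tumours appear as dark (hypoechoic) regions.
img = img_as_float(io.imread("abus_slice.png", as_gray=True))
smoothed = gaussian(img, sigma=2)

# Markers: local minima of the smoothed image, labelled individually.
minima = (smoothed == ndi.minimum_filter(smoothed, size=15))
markers, _ = ndi.label(minima)

# Watershed on the intensity landscape gathers similar tissue around each minimum.
labels = watershed(smoothed, markers)

# Per-region statistics for a later false-positive-reduction / classification step.
candidates = []
for r in regionprops(labels, intensity_image=smoothed):
    if r.area > 200 and r.mean_intensity < smoothed.mean():   # dark, non-tiny regions
        candidates.append((r.label, r.area, r.mean_intensity, r.eccentricity))
print(len(candidates), "candidate regions")
```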
Collapse
|
50
|
|