1. Wan P, Xue H, Zhang S, Kong W, Shao W, Wen B, Zhang D. Image by co-reasoning: A collaborative reasoning-based implicit data augmentation method for dual-view CEUS classification. Med Image Anal 2025;102:103557. PMID: 40174326. DOI: 10.1016/j.media.2025.103557.
Abstract
Dual-view contrast-enhanced ultrasound (CEUS) data are often insufficient to train reliable machine learning models in typical clinical scenarios. A key issue is that limited clinical CEUS data fail to cover the underlying texture variations of specific diseases. Implicit data augmentation offers a flexible way to enrich sample diversity; however, previous studies have not considered inter-view semantic consistency. To address this issue, we propose a novel implicit data augmentation method for dual-view CEUS classification, which performs sample-adaptive data augmentation with collaborative semantic reasoning across views. Specifically, the method constructs a feature augmentation distribution for each ultrasound view of an individual sample, accounting for intra-class variance. To maintain semantic consistency between the augmented views, plausible semantic changes in one view are transferred from similar instances in the other view. In this retrospective study, we validate the proposed method on dual-view CEUS datasets of breast cancer and liver cancer, achieving superior mean diagnostic accuracies of 89.25% and 95.57%, respectively. Experimental results demonstrate its effectiveness in improving model performance with limited clinical CEUS data. Code: https://github.com/wanpeng16/CRIDA.
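Stripped of the co-reasoning machinery, the general idea behind implicit feature-space augmentation can be sketched in a few lines. This is an illustrative pure-Python sketch under our own assumptions (the names `intra_class_std` and `augment` are ours, not from the CRIDA code): noise is drawn per feature dimension with a spread proportional to the intra-class variation, so augmented features remain plausible members of the class.

```python
import random

def intra_class_std(features):
    """Per-dimension standard deviation across one class's feature vectors."""
    dims, n = len(features[0]), len(features)
    stds = []
    for d in range(dims):
        col = [f[d] for f in features]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        stds.append(var ** 0.5)
    return stds

def augment(feature, class_features, strength=0.5, rng=None):
    """Draw an augmented feature by adding Gaussian noise scaled by the
    intra-class variation: dimensions that never vary within the class
    are left untouched, so the perturbation stays semantically plausible."""
    rng = rng or random.Random(0)
    stds = intra_class_std(class_features)
    return [x + rng.gauss(0.0, strength * s) for x, s in zip(feature, stds)]
```

In the dual-view setting of the paper, the per-dimension spread for one view would additionally be informed by similar instances in the other view; the sketch above only shows the single-view backbone of the idea.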
Affiliation(s)
- Peng Wan
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Haiyan Xue
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Shukang Zhang
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Wentao Kong
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China
- Wei Shao
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Baojie Wen
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China; Medical Imaging Center, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, 210008, Jiangsu, China; Nanjing University Institute of Medical Imaging and Artificial Intelligence, Nanjing, 210093, Jiangsu, China
- Daoqiang Zhang
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
2. Harmanani M, Wilson PFR, To MNN, Gilany M, Jamzad A, Fooladgar F, Wodlinger B, Abolmaesumi P, Mousavi P. TRUSWorthy: toward clinically applicable deep learning for confident detection of prostate cancer in micro-ultrasound. Int J Comput Assist Radiol Surg 2025;20:981-989. PMID: 39976857. DOI: 10.1007/s11548-025-03335-y.
Abstract
PURPOSE While deep learning methods have shown great promise in improving the effectiveness of prostate cancer (PCa) diagnosis by detecting suspicious lesions from trans-rectal ultrasound (TRUS), they must overcome multiple simultaneous challenges: high heterogeneity in tissue appearance, significant class imbalance in favor of benign examples, and scarcity in the number and quality of ground truth annotations available to train models. Failure to address even one of these problems can result in unacceptable clinical outcomes. METHODS We propose TRUSWorthy, a carefully designed, tuned, and integrated system for reliable PCa detection. Our pipeline integrates self-supervised learning, multiple-instance learning aggregation using transformers, random-undersampled boosting, and ensembling; these address label scarcity, weak labels, class imbalance, and overconfidence, respectively. We train and rigorously evaluate our method using a large, multi-center dataset of micro-ultrasound data. RESULTS Our method outperforms previous state-of-the-art deep learning methods in terms of accuracy and uncertainty calibration, with AUROC and balanced accuracy scores of 79.9% and 71.5%, respectively. On the 20% of predictions with the highest confidence, we achieve a balanced accuracy of up to 91%. CONCLUSION The success of TRUSWorthy demonstrates the potential of integrated deep learning solutions to meet clinical needs in a highly challenging deployment setting, and is a significant step toward a trustworthy system for computer-assisted PCa diagnosis.
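The random-undersampled ensembling step mentioned in METHODS can be illustrated with a minimal sketch (our own simplification, not the TRUSWorthy pipeline; function names are illustrative): each ensemble member is trained on a class-balanced subsample so benign examples cannot dominate, and member predictions are averaged.

```python
import random

def balanced_subsample(X, y, rng):
    """Random undersampling: keep every minority-class (cancer) example and
    an equally sized random draw of majority-class (benign) examples."""
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    picked = minority + rng.sample(majority, len(minority))
    rng.shuffle(picked)
    return [X[i] for i in picked], [y[i] for i in picked]

def ensemble_predict(models, x):
    """Average member probabilities; the spread across members can also
    serve as a crude uncertainty signal for calibration."""
    probs = [m(x) for m in models]
    return sum(probs) / len(probs)
```

Each member of the real system is a deep network trained on a different balanced draw; here any callable returning a probability stands in for a member.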
Affiliation(s)
- Mohamed Harmanani
- Queen's University, Kingston, Canada
- Vector Institute, Toronto, Canada
- Paul F R Wilson
- Queen's University, Kingston, Canada
- Vector Institute, Toronto, Canada
- Minh Nguyen Nhat To
- University of British Columbia, Vancouver, Canada
- Vector Institute, Toronto, Canada
- Mahdi Gilany
- Queen's University, Kingston, Canada
- Vector Institute, Toronto, Canada
- Amoon Jamzad
- Queen's University, Kingston, Canada
- Vector Institute, Toronto, Canada
- Fahimeh Fooladgar
- University of British Columbia, Vancouver, Canada
- Vector Institute, Toronto, Canada
- Parvin Mousavi
- Queen's University, Kingston, Canada
- Vector Institute, Toronto, Canada
3. Chi J, Chen JH, Wu B, Zhao J, Wang K, Yu X, Zhang W, Huang Y. A Dual-Branch Cross-Modality-Attention Network for Thyroid Nodule Diagnosis Based on Ultrasound Images and Contrast-Enhanced Ultrasound Videos. IEEE J Biomed Health Inform 2025;29:1269-1282. PMID: 39356606. DOI: 10.1109/jbhi.2024.3472609.
Abstract
Contrast-enhanced ultrasound (CEUS) has been extensively employed as an imaging modality in thyroid nodule diagnosis due to its capacity to visualise the distribution and circulation of micro-vessels in organs and lesions in a non-invasive manner. However, current CEUS-based thyroid nodule diagnosis methods suffer from: 1) blurred spatial boundaries between nodules and other anatomies in CEUS videos, and 2) insufficient representation of the local structural information of nodule tissues when features are extracted from CEUS videos alone. In this paper, we propose a novel dual-branch network with a cross-modality-attention mechanism for thyroid nodule diagnosis that integrates information from two related modalities, i.e., CEUS videos and ultrasound (US) images. The mechanism has two parts: the US-attention-from-CEUS transformer (UAC-T) and the CEUS-attention-from-US transformer (CAU-T). The network thus imitates the manner of human radiologists by decomposing the diagnosis into two correlated tasks: 1) the spatio-temporal features extracted from CEUS are hierarchically embedded into the spatial features extracted from US with UAC-T for nodule segmentation; 2) the US spatial features guide the extraction of the CEUS spatio-temporal features with CAU-T for nodule classification. The two tasks are intertwined in the dual-branch end-to-end network and optimized with a multi-task learning (MTL) strategy. The proposed method is evaluated on our collected thyroid US-CEUS dataset. Experimental results show that our method achieves a classification accuracy of 86.92%, specificity of 66.41%, and sensitivity of 97.01%, outperforming state-of-the-art methods. As a general contribution to the field of multi-modality disease diagnosis, the proposed method provides an effective way to combine static information with related dynamic information, improving the quality of deep learning-based diagnosis with the additional benefit of explainability.
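The cross-modality attention at the heart of UAC-T and CAU-T is, at its core, scaled dot-product attention in which queries come from one modality and keys/values from the other. A minimal single-head sketch follows (illustrative only; the paper's transformers are multi-head, learned, and operate on deep feature maps):

```python
import math

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention: queries from one modality
    (e.g. US spatial features) attend over keys/values from the other
    (e.g. CEUS spatio-temporal features)."""
    d = len(queries[0])
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(dim)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # softmax over the keys (max-subtracted for numerical stability)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of the other modality's values
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Swapping which modality supplies the queries versus the keys/values is exactly the difference between the UAC-T and CAU-T directions.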
4. Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound Med Biol 2025;51:189-209. PMID: 39551652. DOI: 10.1016/j.ultrasmedbio.2024.10.005.
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, with early diagnosis being crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To provide physicians with more accurate and efficient computer-assisted diagnosis and intervention, many image processing algorithms for TRUS have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades necessitates a comprehensive summary. This survey therefore provides a narrative review of the field, outlining the evolution of image processing methods in the context of TRUS image analysis while highlighting their relevant contributions. Furthermore, it discusses current challenges and suggests future research directions to advance the field further.
Affiliation(s)
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
5. Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Arch Comput Methods Eng 2025. DOI: 10.1007/s11831-024-10219-y.
6. Rai HM, Yoo J, Razaque A. Comparative analysis of machine learning and deep learning models for improved cancer detection: A comprehensive review of recent advancements in diagnostic techniques. Expert Syst Appl 2024;255:124838. DOI: 10.1016/j.eswa.2024.124838.
7. Alhassan AM. Identification and Localization of Indolent and Aggressive Prostate Cancers Using Multilevel Bi-LSTM. J Imaging Inform Med 2024;37:1591-1608. PMID: 38448760. PMCID: PMC11300760. DOI: 10.1007/s10278-024-01030-z.
Abstract
Identifying indolent and aggressive prostate cancers is a critical problem for optimal treatment. Existing approaches to prostate cancer detection face several challenges: they rely on ground truth labels of limited accuracy, are confounded by histological similarity, do not consider disease pathology characteristics, and the indefinite differences in appearance between cancerous and healthy tissue lead to many false positive and false negative interpretations. Hence, this research introduces a comprehensive framework designed to accurately identify and localize prostate cancers, irrespective of their aggressiveness, through a multilevel bidirectional long short-term memory (Bi-LSTM) model. The pre-processed images are subjected to multilevel feature-map-based U-Net segmentation, bolstered by ResNet-101 and a channel-based attention module that improves performance. Segmented images then undergo feature extraction encompassing several feature types, including statistical features, a global hybrid-based feature map, and a ResNet-101 feature map that enhances detection accuracy. The extracted features are fed to the multilevel Bi-LSTM model, further optimized through channel and spatial attention mechanisms that offer effective localization and recognition of complex cancer structures. The framework thus represents a promising approach for enhancing the diagnosis and localization of both indolent and aggressive prostate cancers. Rigorous testing on distinct datasets demonstrates the model's effectiveness: accuracy, sensitivity, and specificity are 96.72%, 96.17%, and 96.17% on dataset 1, and 94.41%, 93.10%, and 94.96% on dataset 2. These results surpass the efficiency of alternative methods.
Affiliation(s)
- Afnan M Alhassan
- College of Computing and Information Technology, Shaqra University, 11961, Shaqra, Saudi Arabia.
8. To MNN, Fooladgar F, Wilson P, Harmanani M, Gilany M, Sojoudi S, Jamzad A, Chang S, Black P, Mousavi P, Abolmaesumi P. LensePro: label noise-tolerant prototype-based network for improving cancer detection in prostate ultrasound with limited annotations. Int J Comput Assist Radiol Surg 2024;19:1121-1128. PMID: 38598142. DOI: 10.1007/s11548-024-03104-3.
Abstract
PURPOSE The standard of care for prostate cancer (PCa) diagnosis is the histopathological analysis of tissue samples obtained via transrectal ultrasound (TRUS) guided biopsy. Models built with deep neural networks (DNNs) hold the potential for direct PCa detection from TRUS, which allows targeted biopsy and subsequently enhances outcomes. Yet, there are ongoing challenges with training robust models, stemming from issues such as noisy labels, out-of-distribution (OOD) data, and limited labeled data. METHODS This study presents LensePro, a unified method that not only excels in label efficiency but also demonstrates robustness against label noise and OOD data. LensePro comprises two key stages: first, self-supervised learning to extract high-quality feature representations from abundant unlabeled TRUS data and, second, label noise-tolerant prototype-based learning to classify the extracted features. RESULTS Using data from 124 patients who underwent systematic prostate biopsy, LensePro achieves an AUROC, sensitivity, and specificity of 77.9%, 85.9%, and 57.5%, respectively, for detecting PCa in ultrasound. Our model is also effective at detecting OOD data at test time, which is critical for clinical deployment. Ablation studies demonstrate that each component of our method improves PCa detection by addressing one of the three challenges, reinforcing the benefits of a unified approach. CONCLUSION Through comprehensive experiments, LensePro demonstrates state-of-the-art performance for TRUS-based PCa detection. Although further research is necessary to confirm its clinical applicability, LensePro marks a notable advancement in automated computer-aided systems for detecting prostate cancer in ultrasound.
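Prototype-based classification, the second stage of LensePro, can be illustrated in its simplest form (our sketch, not the authors' code; the real method learns prototypes jointly with the network): each class is summarized by the mean of its feature vectors, and a sample takes the label of the nearest prototype. Because many features are averaged into each prototype, individual mislabeled examples have limited influence, which is the intuition behind the label-noise tolerance.

```python
def class_prototypes(features, labels):
    """Mean feature vector per class; averaging dampens noisy labels."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        if y not in sums:
            sums[y] = [0.0] * len(f)
            counts[y] = 0
        sums[y] = [s + v for s, v in zip(sums[y], f)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(prototypes, x):
    """Assign the label of the nearest prototype (squared Euclidean)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, x))
    return min(prototypes, key=lambda y: dist(prototypes[y]))
```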
Affiliation(s)
- Minh Nguyen Nhat To
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Fahimeh Fooladgar
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Paul Wilson
- School of Computing, Queen's University, Kingston, Canada
- Mahdi Gilany
- School of Computing, Queen's University, Kingston, Canada
- Samira Sojoudi
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Amoon Jamzad
- School of Computing, Queen's University, Kingston, Canada
- Silvia Chang
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Peter Black
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Parvin Mousavi
- School of Computing, Queen's University, Kingston, Canada
- Purang Abolmaesumi
- Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
9. Wilson PFR, Harmanani M, To MNN, Gilany M, Jamzad A, Fooladgar F, Wodlinger B, Abolmaesumi P, Mousavi P. Toward confident prostate cancer detection using ultrasound: a multi-center study. Int J Comput Assist Radiol Surg 2024;19:841-849. PMID: 38704793. DOI: 10.1007/s11548-024-03119-w.
Abstract
PURPOSE Deep learning-based analysis of micro-ultrasound images to detect cancerous lesions is a promising tool for improving prostate cancer (PCa) diagnosis. An ideal model should confidently identify cancer while responding with appropriate uncertainty when presented with out-of-distribution inputs that arise during deployment due to imaging artifacts and the biological heterogeneity of patients and prostatic tissue. METHODS Using micro-ultrasound data from 693 patients across 5 clinical centers who underwent micro-ultrasound guided prostate biopsy, we train and evaluate convolutional neural network models for PCa detection. To improve robustness to out-of-distribution inputs, we employ and comprehensively benchmark several state-of-the-art uncertainty estimation methods. RESULTS PCa detection models achieve performance scores up to 76% average AUROC with a 10-fold cross-validation setup. Models with uncertainty estimation obtain expected calibration error scores as low as 2%, indicating that confident predictions are very likely to be correct. Visualizations of the model output demonstrate that the model correctly distinguishes healthy from malignant tissue. CONCLUSION Deep learning models have been developed to confidently detect PCa lesions from micro-ultrasound. The performance of these models, determined from a large and diverse dataset, is competitive with visual analysis of magnetic resonance imaging, the clinical benchmark used to identify PCa lesions for targeted biopsy. Deep learning with micro-ultrasound should be further studied as an avenue for targeted prostate biopsy.
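Expected calibration error (ECE), the metric reported in RESULTS, bins predictions by confidence and compares each bin's average confidence with its empirical accuracy. A standard binary-classification formulation can be sketched as follows (a common implementation of the metric, not code from the paper):

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: partition predictions into equal-width confidence bins, then
    average the |confidence - accuracy| gap per bin, weighted by bin size.
    `probs` are P(class 1); `labels` are 0/1 ground truth."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        conf = p if p >= 0.5 else 1.0 - p          # confidence in the prediction
        pred = 1 if p >= 0.5 else 0
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, 1 if pred == y else 0))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(hit for _, hit in b) / len(b)
            ece += (len(b) / n) * abs(avg_conf - acc)
    return ece
```

An ECE near zero, as the 2% score above, means that when the model reports, say, 90% confidence, it is right about 90% of the time.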
Affiliation(s)
- Minh Nguyen Nhat To
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Mahdi Gilany
- School of Computing, Queen's University, Kingston, Canada
- Amoon Jamzad
- School of Computing, Queen's University, Kingston, Canada
- Fahimeh Fooladgar
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Parvin Mousavi
- School of Computing, Queen's University, Kingston, Canada
10. Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024;181:105279. PMID: 37977054. DOI: 10.1016/j.ijmedinf.2023.105279.
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis enables effective treatment and greatly reduces mortality. The main medical imaging tools for prostate cancer screening are MRI, CT, and ultrasound. Over the past 20 years, these imaging methods have made great progress with machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected papers on medical image processing of the prostate and prostate cancer on MR, CT, and ultrasound images through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate gland across imaging modalities, and detection of prostate cancer lesions. CONCLUSION The collated papers show that research on machine learning and deep learning for the diagnosis and staging of prostate cancer is in its infancy. Most existing studies address diagnosis and lesion classification, with accuracy below 0.95 even in the best results, and studies on staging are far fewer. Research is focused mainly on MR images, with much less work on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China.
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
11. Huang TL, Lu NH, Huang YH, Twan WH, Yeh LR, Liu KY, Chen TB. Transfer learning with CNNs for efficient prostate cancer and BPH detection in transrectal ultrasound images. Sci Rep 2023;13:21849. PMID: 38071254. PMCID: PMC10710441. DOI: 10.1038/s41598-023-49159-1.
Abstract
Early detection of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) is crucial for maintaining the health and well-being of aging male populations. This study aims to evaluate the performance of transfer learning with convolutional neural networks (CNNs) for efficient classification of PCa and BPH in transrectal ultrasound (TRUS) images. A retrospective experimental design was employed in this study, with 1380 TRUS images for PCa and 1530 for BPH. Seven state-of-the-art deep learning (DL) methods were employed as classifiers with transfer learning applied to popular CNN architectures. Performance indices, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), Kappa value, and Hindex (Youden's index), were used to assess the feasibility and efficacy of the CNN methods. The CNN methods with transfer learning demonstrated a high classification performance for TRUS images, with all accuracy, specificity, sensitivity, PPV, NPV, Kappa, and Hindex values surpassing 0.9400. The optimal accuracy, sensitivity, and specificity reached 0.9987, 0.9980, and 0.9980, respectively, as evaluated using twofold cross-validation. The investigated CNN methods with transfer learning showcased their efficiency and ability for the classification of PCa and BPH in TRUS images. Notably, the EfficientNetV2 with transfer learning displayed a high degree of effectiveness in distinguishing between PCa and BPH, making it a promising tool for future diagnostic applications.
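Several of the reported indices can be computed directly from a binary confusion matrix. A small helper (our own illustrative code, not from the paper) shows how sensitivity, specificity, Youden's index (the "Hindex" above), and Cohen's kappa relate:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, Youden's index, and Cohen's kappa from
    binary confusion counts (true/false positives and negatives)."""
    n = tp + fp + tn + fn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    youden = sens + spec - 1.0
    po = (tp + tn) / n  # observed agreement (accuracy)
    # expected agreement by chance, from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (po - pe) / (1.0 - pe)
    return {"sensitivity": sens, "specificity": spec,
            "youden": youden, "kappa": kappa}
```

Values above 0.94 on all of these indices, as the study reports, indicate agreement well beyond chance on both classes.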
Affiliation(s)
- Te-Li Huang
- Department of Radiology, Kaohsiung Veterans General Hospital, No. 386, Dazhong 1st Rd., Zuoying Dist., Kaohsiung, 81362, Taiwan
- Nan-Han Lu
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Department of Pharmacy, Tajen University, No.20, Weixin Rd., Yanpu Township, Pingtung, 90741, Taiwan
- Department of Radiology, E-DA Hospital, I-Shou University, No.1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Yung-Hui Huang
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Wen-Hung Twan
- Department of Life Sciences, National Taitung University, No.369, Sec. 2, University Rd., Taitung, 95092, Taiwan
- Li-Ren Yeh
- Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, No.1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Hospital, I-Shou University, No.1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Tai-Been Chen
- Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu, 30010, Taiwan
12. Wan P, Xue H, Liu C, Chen F, Kong W, Zhang D. Dynamic Perfusion Representation and Aggregation Network for Nodule Segmentation Using Contrast-Enhanced US. IEEE J Biomed Health Inform 2023;27:3431-3442. PMID: 37097791. DOI: 10.1109/jbhi.2023.3270307.
Abstract
Dynamic contrast-enhanced ultrasound (CEUS) imaging has been widely applied in lesion detection and characterization due to the real-time observation of microvascular perfusion it offers. Accurate lesion segmentation is of great importance to quantitative and qualitative perfusion analysis. In this paper, we propose a novel dynamic perfusion representation and aggregation network (DpRAN) for the automatic segmentation of lesions from dynamic CEUS imaging. The core challenge of this work lies in modeling the enhancement dynamics of various perfusion areas. Specifically, we divide enhancement features into two scales: short-range enhancement patterns and long-range evolution tendency. To effectively represent real-time enhancement characteristics and aggregate them in a global view, we introduce the perfusion excitation (PE) gate and the cross-attention temporal aggregation (CTA) module, respectively. Unlike common temporal fusion methods, we also introduce an uncertainty estimation strategy to help the model first locate the critical enhancement point, at which a relatively distinguished enhancement pattern is displayed. The segmentation performance of DpRAN is validated on our collected CEUS dataset of thyroid nodules, obtaining a mean Dice similarity coefficient (DSC) of 0.794 and intersection over union (IoU) of 0.676. This superior performance demonstrates its efficacy in capturing distinguished enhancement characteristics for lesion recognition.
13. He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023;13:1189370. PMID: 37546423. PMCID: PMC10400334. DOI: 10.3389/fonc.2023.1189370.
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making the topic understandable not only for radiologists but also for general physicians without specialized training in imaging interpretation. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
14
Gilany M, Wilson P, Perera-Ortega A, Jamzad A, To MNN, Fooladgar F, Wodlinger B, Abolmaesumi P, Mousavi P. TRUSformer: improving prostate cancer detection from micro-ultrasound using attention and self-supervision. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02949-4. [PMID: 37217768 DOI: 10.1007/s11548-023-02949-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Accepted: 05/02/2023] [Indexed: 05/24/2023]
Abstract
PURPOSE A large body of previous machine learning methods for ultrasound-based prostate cancer detection classify small regions of interest (ROIs) of ultrasound signals that lie within a larger needle trace corresponding to a prostate tissue biopsy (called a biopsy core). These ROI-scale models suffer from weak labeling, as the histopathology results available for biopsy cores only approximate the distribution of cancer in the ROIs. ROI-scale models also fail to take advantage of contextual information that is normally considered by pathologists, i.e., information about surrounding tissue and larger-scale trends. We aim to improve cancer detection by taking a multi-scale approach that operates at both the ROI scale and the biopsy core scale. METHODS Our multi-scale approach combines (i) an "ROI-scale" model trained using self-supervised learning to extract features from small ROIs and (ii) a "core-scale" transformer model that processes a collection of extracted features from multiple ROIs in the needle trace region to predict the tissue type of the corresponding core. Attention maps, as a by-product, allow us to localize cancer at the ROI scale. RESULTS We analyze this method using a dataset of micro-ultrasound acquired from 578 patients who underwent prostate biopsy, and compare our model to baseline models and other large-scale studies in the literature. Our model shows consistent and substantial performance improvements compared to ROI-scale-only models. It achieves [Formula: see text] AUROC, a statistically significant improvement over ROI-scale classification. We also compare our method to large studies on prostate cancer detection using other imaging modalities. CONCLUSIONS Taking a multi-scale approach that leverages contextual information improves prostate cancer detection compared to ROI-scale-only models. The proposed model achieves a statistically significant improvement in performance and outperforms other large-scale studies in the literature. Our code is publicly available at www.github.com/med-i-lab/TRUSFormer.
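The core-scale aggregation the authors describe can be pictured as attention pooling over ROI feature vectors: each ROI along the needle trace gets a weight, and the weights double as a localization map. The sketch below is a minimal numpy illustration of that idea, not the TRUSformer architecture itself; the scoring vector `w` stands in for learned attention parameters.

```python
import numpy as np

def attention_pool(roi_features, w):
    """Aggregate ROI-level feature vectors into one core-level vector
    via softmax attention; the weights reveal which ROIs drive the call."""
    scores = roi_features @ w                      # one score per ROI
    scores = scores - scores.max()                 # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()   # softmax over ROIs
    core_vec = attn @ roi_features                 # weighted sum, shape (d,)
    return core_vec, attn

rng = np.random.default_rng(0)
rois = rng.normal(size=(8, 16))   # 8 ROIs along a needle trace, 16-d features
w = rng.normal(size=16)           # hypothetical learned scoring vector
core, attn = attention_pool(rois, w)
```

A classifier on `core` then predicts the core's tissue type, while `attn` gives the ROI-scale cancer localization described in the abstract.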
Affiliation(s)
- Mahdi Gilany
- School of Computing, Queen's University, Kingston, Canada
- Paul Wilson
- School of Computing, Queen's University, Kingston, Canada
- Amoon Jamzad
- School of Computing, Queen's University, Kingston, Canada
- Minh Nguyen Nhat To
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Fahimeh Fooladgar
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
- Parvin Mousavi
- School of Computing, Queen's University, Kingston, Canada
15
Mokoatle M, Marivate V, Mapiye D, Bornman R, Hayes VM. A review and comparative study of cancer detection using machine learning: SBERT and SimCSE application. BMC Bioinformatics 2023; 24:112. [PMID: 36959534 PMCID: PMC10037872 DOI: 10.1186/s12859-023-05235-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Accepted: 03/17/2023] [Indexed: 03/25/2023] Open
Abstract
BACKGROUND Using visual, biological, and electronic health records data as the sole input source, pretrained convolutional neural networks and conventional machine learning methods have been heavily employed for the identification of various malignancies. Initially, a series of preprocessing and image segmentation steps is performed to extract region-of-interest features from noisy data. The extracted features are then applied to several machine learning and deep learning methods for the detection of cancer. METHODS In this work, a review of the methods that have been applied to develop machine learning algorithms that detect cancer is provided. With more than 100 types of cancer, this study examines only research on the four most common and prevalent cancers worldwide: lung, breast, prostate, and colorectal cancer. Next, using state-of-the-art sentence transformers, namely SBERT (2019) and the unsupervised SimCSE (2021), this study proposes a new methodology for detecting cancer. This method requires raw DNA sequences of matched tumor/normal pairs as the only input. The learnt DNA representations retrieved from SBERT and SimCSE are then passed to machine learning algorithms (XGBoost, Random Forest, LightGBM, and CNNs) for classification. As far as we are aware, SBERT and SimCSE transformers have not previously been applied to represent DNA sequences in cancer detection settings. RESULTS The XGBoost model, which had the highest overall accuracy of 73 ± 0.13% using SBERT embeddings and 75 ± 0.12% using SimCSE embeddings, was the best performing classifier. In light of these findings, it can be concluded that incorporating sentence representations from SimCSE's sentence transformer only marginally improved the performance of machine learning models.
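The pipeline the abstract describes (DNA sequences treated as sentences, embedded, then classified) can be sketched with a toy embedding. The k-mer bag below is only a stand-in for the SBERT/SimCSE representations, the nearest-centroid rule a stand-in for XGBoost, and the sequences are hypothetical fragments, not real tumor/normal data.

```python
import numpy as np

def kmer_sentence(seq, k=3):
    """Tokenize a DNA string into overlapping k-mers, the 'words' a
    sentence transformer such as SBERT or SimCSE would embed."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def bag_embed(seq, vocab, k=3):
    """Toy stand-in for a sentence embedding: normalized k-mer counts."""
    vec = np.zeros(len(vocab))
    for kmer in kmer_sentence(seq, k):
        if kmer in vocab:
            vec[vocab[kmer]] += 1
    n = np.linalg.norm(vec)
    return vec / n if n else vec

# hypothetical matched tumor/normal fragments
tumor = ["ATGCGT", "ATGCGA"]
normal = ["TTTAAA", "TTTAAC"]
kmers = sorted({km for s in tumor + normal for km in kmer_sentence(s)})
vocab = {km: i for i, km in enumerate(kmers)}

# nearest-centroid classifier over the embeddings
# (the paper feeds them to XGBoost, Random Forest, LightGBM, or CNNs)
c_t = np.mean([bag_embed(s, vocab) for s in tumor], axis=0)
c_n = np.mean([bag_embed(s, vocab) for s in normal], axis=0)

def predict(seq):
    e = bag_embed(seq, vocab)
    return "tumor" if np.dot(e, c_t) > np.dot(e, c_n) else "normal"
```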
Affiliation(s)
- Mpho Mokoatle
- Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Vukosi Marivate
- Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Riana Bornman
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
- Vanessa M Hayes
- School of Medical Sciences, The University of Sydney, Sydney, Australia
- School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
16
Feng X, Cai W, Zheng R, Tang L, Zhou J, Wang H, Liao J, Luo B, Cheng W, Wei A, Zhao W, Jing X, Liang P, Yu J, Huang Q. Diagnosis of hepatocellular carcinoma using deep network with multi-view enhanced patterns mined in contrast-enhanced ultrasound data. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2023; 118:105635. [DOI: 10.1016/j.engappai.2022.105635] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
17
Detection of mitotic HEp-2 cell images: role of feature representation and classification framework under class skew. Med Biol Eng Comput 2022; 60:2405-2421. [DOI: 10.1007/s11517-022-02613-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 06/07/2022] [Indexed: 10/17/2022]
18
Deep convolution neural networks learned image classification for early cancer detection using lightweight. Soft comput 2022. [DOI: 10.1007/s00500-022-07166-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
19
Gurwin A, Kowalczyk K, Knecht-Gurwin K, Stelmach P, Nowak Ł, Krajewski W, Szydełko T, Małkiewicz B. Alternatives for MRI in Prostate Cancer Diagnostics-Review of Current Ultrasound-Based Techniques. Cancers (Basel) 2022; 14:1859. [PMID: 35454767 PMCID: PMC9028694 DOI: 10.3390/cancers14081859] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Revised: 04/01/2022] [Accepted: 04/06/2022] [Indexed: 02/04/2023] Open
Abstract
The purpose of this review is to present the current role of ultrasound-based techniques in the diagnostic pathway of prostate cancer (PCa). With the overdiagnosis and overtreatment of clinically insignificant PCa over the past years, multiparametric magnetic resonance imaging (mpMRI) started to be recommended for every patient suspected of PCa before performing a biopsy. It enabled targeted sampling of suspicious prostate regions, improving the accuracy of the traditional systematic biopsy. However, mpMRI is associated with high costs, relatively low availability, a long and separate procedure, and exposure to the contrast agent. Novel ultrasound modalities, such as shear wave elastography (SWE), contrast-enhanced ultrasound (CEUS), or high-frequency micro-ultrasound (MicroUS), may be capable of matching the performance of mpMRI without its limitations. Moreover, real-time lesion visualization during biopsy would significantly simplify the diagnostic process. Another value of these new techniques is the ability to enhance the performance of mpMRI by fusing images from multiple modalities. Such models might be further analyzed by artificial intelligence to mark regions of interest for investigators and help to decide about biopsy indications. The dynamic development and promising results of new ultrasound-based techniques should encourage researchers to thoroughly study their utilization in prostate imaging.
Affiliation(s)
- Adam Gurwin
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Kamil Kowalczyk
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Klaudia Knecht-Gurwin
- Department of Dermatology, Venereology and Allergology, Wroclaw Medical University, 50-368 Wroclaw, Poland
- Paweł Stelmach
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Łukasz Nowak
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Wojciech Krajewski
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Tomasz Szydełko
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
- Bartosz Małkiewicz
- University Center of Excellence in Urology, Department of Minimally Invasive and Robotic Urology, Wroclaw Medical University, 50-556 Wroclaw, Poland
20
Akatsuka J, Numata Y, Morikawa H, Sekine T, Kayama S, Mikami H, Yanagi M, Endo Y, Takeda H, Toyama Y, Yamaguchi R, Kimura G, Kondo Y, Yamamoto Y. A data-driven ultrasound approach discriminates pathological high grade prostate cancer. Sci Rep 2022; 12:860. [PMID: 35039648 PMCID: PMC8764059 DOI: 10.1038/s41598-022-04951-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 01/04/2022] [Indexed: 12/14/2022] Open
Abstract
Accurate prostate cancer screening is imperative for reducing the risk of cancer death. Ultrasound imaging, although easy, tends to have low resolution and high inter-observer variability. Here, we show that our integrated machine learning approach enabled the detection of pathological high-grade cancer by the ultrasound procedure. Our study included 772 consecutive patients and 2899 prostate ultrasound images obtained at the Nippon Medical School Hospital. We applied machine learning analyses using ultrasound imaging data and clinical data to detect high-grade prostate cancer. The area under the curve (AUC) using clinical data was 0.691. On the other hand, the AUC when using clinical data and ultrasound imaging data was 0.835 (p = 0.007). Our data-driven ultrasound approach offers an efficient tool to triage patients with high-grade prostate cancers and expands the possibility of ultrasound imaging for the prostate cancer detection pathway.
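The comparison reported above (AUC 0.691 with clinical data alone vs. 0.835 with clinical plus ultrasound imaging data) uses the standard rank-based definition of the area under the ROC curve: the probability that a random positive case outscores a random negative one. The sketch below is the generic Mann-Whitney estimator of that quantity, not the authors' code.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) identity:
    fraction of positive/negative pairs where the positive outscores
    the negative, counting ties as half."""
    labels = np.asarray(labels)
    pos = np.asarray(scores, dtype=float)[labels == 1]
    neg = np.asarray(scores, dtype=float)[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

With hypothetical risk scores, `auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])` counts three of four positive/negative pairs correctly ordered, so the AUC is 0.75.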
Affiliation(s)
- Jun Akatsuka
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, 103-0027, Japan
- Yasushi Numata
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, 103-0027, Japan
- Hiromu Morikawa
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, 103-0027, Japan
- Tetsuro Sekine
- Department of Radiology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Shigenori Kayama
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Hikaru Mikami
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Masato Yanagi
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Yuki Endo
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Hayato Takeda
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Yuka Toyama
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Ruri Yamaguchi
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, 103-0027, Japan
- Go Kimura
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Yukihiro Kondo
- Department of Urology, Nippon Medical School Hospital, Tokyo, 113-8603, Japan
- Yoichiro Yamamoto
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo, 103-0027, Japan
21
Zhou J, Pan F, Li W, Hu H, Wang W, Huang Q. Feature Fusion for Diagnosis of Atypical Hepatocellular Carcinoma in Contrast- Enhanced Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:114-123. [PMID: 34487493 DOI: 10.1109/tuffc.2021.3110590] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Contrast-enhanced ultrasound (CEUS) is generally employed for the diagnosis of focal liver lesions (FLLs). Among FLLs, atypical hepatocellular carcinoma (HCC) is difficult to distinguish from focal nodular hyperplasia (FNH) in CEUS video. For this reason, we propose and evaluate a feature fusion method to address this problem. The proposed algorithm extracts a set of hand-crafted features and deep features from CEUS cine clip data. The hand-crafted features include a spatial-temporal feature based on a novel descriptor called the Velocity-Similarity and Dissimilarity Matching Local Binary Pattern (V-SDMLBP), and the deep features come from a 3-D convolutional neural network (3D-CNN). The two types of features are then fused. Finally, a classifier is employed to diagnose HCC or FNH. Several classifiers achieved excellent performance, which demonstrates the superiority of the fused features. In addition, compared with general CNNs, the proposed fused features have better interpretability.
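The fusion step this abstract describes (hand-crafted V-SDMLBP statistics concatenated with 3D-CNN features before classification) is, at its simplest, early fusion of normalized feature blocks. The sketch below assumes z-score normalization of each block before concatenation, which the abstract does not specify; the feature matrices are random placeholders.

```python
import numpy as np

def fuse(handcrafted, deep):
    """Early fusion: z-score each feature block per column, then
    concatenate so a downstream classifier sees both the hand-crafted
    spatial-temporal features and the CNN features on a common scale."""
    def zscore(x):
        sd = x.std(axis=0)
        return (x - x.mean(axis=0)) / np.where(sd == 0, 1, sd)
    return np.hstack([zscore(handcrafted), zscore(deep)])

hc = np.random.default_rng(1).normal(size=(10, 6))    # e.g. V-SDMLBP stats
dp = np.random.default_rng(2).normal(size=(10, 32))   # e.g. 3D-CNN features
fused = fuse(hc, dp)   # 10 clips, 38 fused features each
```

Any classifier (SVM, random forest, logistic regression) can then be trained on `fused` to separate HCC from FNH.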
22
DDV: A Taxonomy for Deep Learning Methods in Detecting Prostate Cancer. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10485-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
23
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed as steadily as in other medical imaging modalities. Characteristic issues of US imaging, namely its manual operation and acoustic shadows, make image quality control difficult. In this review, we introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
24
Deep Neural Architectures for Contrast Enhanced Ultrasound (CEUS) Focal Liver Lesions Automated Diagnosis. SENSORS 2021; 21:s21124126. [PMID: 34208548 PMCID: PMC8235629 DOI: 10.3390/s21124126] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 06/04/2021] [Accepted: 06/10/2021] [Indexed: 12/15/2022]
Abstract
Computer vision, biomedical image processing, and deep learning are related fields with a tremendous impact on the interpretation of medical images today. Among biomedical image sensing modalities, ultrasound (US) is one of the most widely used in practice, since it is noninvasive, accessible, and cheap. Its main drawback, compared to other imaging modalities like computed tomography (CT) or magnetic resonance imaging (MRI), is its increased dependence on the human operator. One important step toward reducing this dependence is the implementation of a computer-aided diagnosis (CAD) system for US imaging. The aim of the paper is to examine the application of contrast-enhanced ultrasound imaging (CEUS) to the problem of automated focal liver lesion (FLL) diagnosis using deep neural networks (DNN). Custom DNN designs are compared with state-of-the-art architectures, either pre-trained or trained from scratch. Our work improves on and broadens previous work in the field in several respects, e.g., a novel leave-one-patient-out evaluation procedure, which further enabled us to formulate a hard-voting classification scheme. We show the effectiveness of our models, i.e., 88% accuracy reported against a larger number of liver lesion types: hepatocellular carcinomas (HCC), hypervascular metastases (HYPERM), hypovascular metastases (HYPOM), hemangiomas (HEM), and focal nodular hyperplasia (FNH).
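The hard-voting scheme built on top of the leave-one-patient-out protocol can be sketched as majority voting over a patient's frame-level predictions: every frame of the held-out patient is classified, and the patient-level diagnosis is the most frequent vote. The patient IDs and lesion labels below are hypothetical, and the sketch shows only the voting half, not the DNN or the cross-validation loop.

```python
import numpy as np

def hard_vote_by_patient(patient_ids, frame_preds):
    """Pool all frame-level predictions belonging to each patient and
    return the majority (hard) vote as the patient-level label."""
    out = {}
    for pid in set(patient_ids):
        votes = [p for i, p in zip(patient_ids, frame_preds) if i == pid]
        vals, counts = np.unique(votes, return_counts=True)
        out[pid] = vals[np.argmax(counts)]   # most frequent class wins
    return out

# hypothetical per-frame lesion-type predictions for three held-out patients
pids  = ["p1", "p1", "p1", "p2", "p2", "p3"]
preds = ["HCC", "HCC", "FNH", "HEM", "HEM", "FNH"]
patient_level = hard_vote_by_patient(pids, preds)
```

Under leave-one-patient-out, the classifier is retrained with each patient's frames excluded, and this voting step turns its per-frame outputs into one diagnosis per patient.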
25
Wan P, Chen F, Liu C, Kong W, Zhang D. Hierarchical Temporal Attention Network for Thyroid Nodule Recognition Using Dynamic CEUS Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1646-1660. [PMID: 33651687 DOI: 10.1109/tmi.2021.3063421] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Contrast-enhanced ultrasound (CEUS) has emerged as a popular imaging modality in thyroid nodule diagnosis due to its ability to visualize vascular distribution in real time. Recently, a number of learning-based methods have been dedicated to mining pathology-related enhancement dynamics and making predictions in a single step, ignoring a native diagnostic dependency: in clinics, the differentiation of benign from malignant nodules always precedes the recognition of pathological types. In this paper, we propose a novel hierarchical temporal attention network (HiTAN) for thyroid nodule diagnosis using dynamic CEUS imaging, which unifies dynamic enhancement feature learning and hierarchical nodule classification in a deep framework. Specifically, this method decomposes the diagnosis of nodules into an ordered two-stage classification task, where the diagnostic dependency is modeled by Gated Recurrent Units (GRUs). Besides, we design a local-to-global temporal aggregation (LGTA) operator to perform comprehensive temporal fusion along the hierarchical prediction path. Particularly, local temporal information is defined as typical enhancement patterns identified with the guidance of the perfusion representation learned at the differentiation level. Then, we leverage an attention mechanism to embed global enhancement dynamics into each identified salient pattern. In this study, we evaluate the proposed HiTAN method on the collected CEUS dataset of thyroid nodules. Extensive experimental results validate the efficacy of the dynamic pattern learning, fusion, and hierarchical diagnosis mechanism.
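The ordered two-stage dependency HiTAN models (benign/malignant differentiation first, pathological type second, with the second stage conditioned on the first) can be illustrated with a toy linear scorer at each stage. The real model uses GRUs and temporal attention over CEUS dynamics, so this sketch captures only the control flow; every weight and subtype name below is made up.

```python
import numpy as np

def hierarchical_diagnose(features, w_stage1, branches):
    """Two-stage diagnosis mirroring the clinical order: first decide
    benign vs malignant, then recognize a pathological subtype only
    within the branch chosen at stage one."""
    malignant = float(features @ w_stage1) > 0          # stage 1
    branch = branches["malignant" if malignant else "benign"]
    scores = {name: float(features @ w) for name, w in branch.items()}
    subtype = max(scores, key=scores.get)               # stage 2
    return ("malignant" if malignant else "benign"), subtype

feats = np.array([1.0, 2.0])        # stand-in for learned CEUS features
w1 = np.array([1.0, 1.0])           # hypothetical stage-1 weights
branches = {                         # hypothetical stage-2 heads
    "malignant": {"papillary": np.array([1.0, 0.0]),
                  "medullary": np.array([0.0, 1.0])},
    "benign": {"nodular goiter": np.array([1.0, 0.0]),
               "adenoma": np.array([0.0, 1.0])},
}
level1, level2 = hierarchical_diagnose(feats, w1, branches)
```

The point of the structure is that the subtype head never sees classes from the wrong branch, which is the diagnostic dependency the abstract says one-step methods ignore.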
26
Lian S, Li L, Lian G, Xiao X, Luo Z, Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:852-862. [PMID: 31095493 DOI: 10.1109/tcbb.2019.2917188] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Retinal vessel segmentation is a critical procedure towards the accurate visualization, diagnosis, early treatment, and surgery planning of ocular diseases. Recent deep learning-based approaches have achieved impressive performance in retinal vessel segmentation. However, they usually apply global image pre-processing and take whole retinal images as input during network training, which has two drawbacks for accurate retinal vessel segmentation. First, these methods do not exploit local patch information. Second, they overlook the geometric constraint that the retina occupies only a specific area within the whole image or the extracted patch. As a consequence, these global-based methods struggle with details, such as recognizing small thin vessels and discriminating the optic disk. To address these drawbacks, this study proposes a Global and Local enhanced residual U-nEt (GLUE) for accurate retinal vessel segmentation, which benefits from both globally and locally enhanced information inside the retinal region. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method, which consistently improves segmentation accuracy over a conventional U-Net and achieves competitive performance compared to the state-of-the-art.
27
Zhou Y, Huang W, Dong P, Xia Y, Wang S. D-UNet: A Dimension-Fusion U Shape Network for Chronic Stroke Lesion Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:940-950. [PMID: 31502985 DOI: 10.1109/tcbb.2019.2939522] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Assessing the location and extent of lesions caused by chronic stroke is critical for medical diagnosis, surgical planning, and prognosis. In recent years, with the rapid development of 2D and 3D convolutional neural networks (CNN), the encoder-decoder structure has shown great potential in the field of medical image segmentation. However, 2D CNNs ignore the 3D information of medical images, while 3D CNNs suffer from high computational resource demands. This paper proposes a new architecture called dimension-fusion-UNet (D-UNet), which innovatively combines 2D and 3D convolution in the encoding stage. The proposed architecture achieves better segmentation performance than 2D networks, while requiring significantly less computation time than 3D networks. Furthermore, to alleviate the data imbalance between positive and negative samples during network training, we propose a new loss function called Enhance Mixing Loss (EML), which adds a weighted focal coefficient and combines two traditional loss functions. The proposed method has been tested on the ATLAS dataset and compared to three state-of-the-art methods. The results demonstrate that the proposed method achieves the best performance, with DSC = 0.5349 ± 0.2763 and precision = 0.6331 ± 0.295.
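The Enhance Mixing Loss is described only as a weighted focal coefficient combined with two traditional loss functions; its exact form is not given in the abstract. The sketch below blends a focal binary cross-entropy with a Dice term as one plausible reading of that description, on flat numpy arrays standing in for voxel probabilities and labels.

```python
import numpy as np

def focal_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary cross-entropy with a focal coefficient (1 - p_t)^gamma
    that down-weights easy voxels, easing class imbalance."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def dice_loss(p, y, eps=1e-7):
    """Overlap-based loss; insensitive to how many background voxels exist."""
    inter = float((p * y).sum())
    return 1.0 - (2 * inter + eps) / (float(p.sum() + y.sum()) + eps)

def mixing_loss(p, y, alpha=0.5):
    """A focal term blended with an overlap term, in the spirit of the
    paper's Enhance Mixing Loss (the exact published form may differ)."""
    return alpha * focal_bce(p, y) + (1 - alpha) * dice_loss(p, y)
```

Because the focal factor shrinks the contribution of confidently correct background voxels while the Dice term rewards lesion overlap directly, the blend penalizes a network that ignores the rare positive class.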
28
Moreno S, Bonfante M, Zurek E, Cherezov D, Goldgof D, Hall L, Schabath M. A Radiogenomics Ensemble to Predict EGFR and KRAS Mutations in NSCLC. Tomography 2021; 7:154-168. [PMID: 33946756 PMCID: PMC8162978 DOI: 10.3390/tomography7020014] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 04/23/2021] [Accepted: 04/27/2021] [Indexed: 12/21/2022]
Abstract
Lung cancer causes more deaths globally than any other type of cancer. To determine the best treatment, detecting EGFR and KRAS mutations is of interest. However, non-invasive ways to obtain this information are not available, and relevant public datasets are often too small, so the performance of single classifiers is not outstanding. In this paper, an ensemble approach is applied to increase the performance of EGFR and KRAS mutation prediction using a small dataset. A new voting scheme, Selective Class Average Voting (SCAV), is proposed, and its performance is assessed both for machine learning models and CNNs. For the EGFR mutation, the machine learning approach increased sensitivity from 0.66 to 0.75 and AUC from 0.68 to 0.70. With the deep learning approach, an AUC of 0.846 was obtained, and with SCAV, the accuracy of the model was increased from 0.80 to 0.857. For the KRAS mutation, a significant increase in performance was found both in the machine learning models (0.65 to 0.71 AUC) and the deep learning models (0.739 to 0.778 AUC). The results obtained in this work show how to effectively learn from small image datasets to predict EGFR and KRAS mutations, and that using ensembles with SCAV increases the performance of machine learning classifiers and CNNs. The results provide confidence that, as large datasets become available, tools to augment clinical capabilities can be fielded.
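SCAV's exact selection rule is not given in the abstract, so no attempt is made to reproduce it here. The baseline it modifies, class-average (soft) voting, looks like the sketch below: average each classifier's class-probability vector and take the argmax. The three classifiers and their [wild-type, mutant] scores are hypothetical.

```python
import numpy as np

def class_average_vote(prob_sets):
    """Soft voting: average the class-probability vectors from several
    classifiers and pick the argmax. SCAV in the paper additionally
    selects which classifiers contribute per class; this is only the
    plain-average baseline it builds on."""
    avg = np.mean(np.asarray(prob_sets, dtype=float), axis=0)
    return int(np.argmax(avg)), avg

# three hypothetical classifiers scoring [wild-type, mutant]
probs = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]
label, avg = class_average_vote(probs)
```

Here two of the three classifiers favor the mutant class strongly enough that the averaged vector does too, so the ensemble outputs class 1 even though one member disagreed.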
Affiliation(s)
- Silvia Moreno
- Systems Engineering, Universidad Simon Bolivar, Barranquilla 080001, Colombia
- Systems Engineering, Universidad del Norte, Atlántico 080001, Colombia
- Correspondence: ; Tel.: +57-300-555-5132
- Mario Bonfante
- Systems Engineering, Universidad Simon Bolivar, Barranquilla 080001, Colombia
- Eduardo Zurek
- Systems Engineering, Universidad del Norte, Atlántico 080001, Colombia
- Dmitry Cherezov
- Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Matthew Schabath
- Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL 33617, USA
|
29
|
Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives. Diagnostics (Basel) 2021; 11:diagnostics11020354. [PMID: 33672608 PMCID: PMC7924061 DOI: 10.3390/diagnostics11020354] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 02/16/2021] [Accepted: 02/17/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) is the field of computer science that aims to build smart devices performing tasks that currently require human intelligence. Through machine learning (ML) and deep learning (DL), computers learn by example, much as human beings do naturally. AI is revolutionizing healthcare. Digital pathology increasingly relies on AI to help researchers analyze larger data sets and provide faster, more accurate diagnoses of prostate cancer lesions. Applied to diagnostic imaging, AI has shown excellent accuracy in detecting prostate lesions and in predicting patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of prostate cancer treatment, and its toxicity for patients is often difficult to predict. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects, giving doctors better insight into how to plan radiotherapy treatment. Extending the capabilities of surgical robots toward more autonomous tasks will allow them to use information from the surgical field, recognize issues and take the proper actions without human intervention.
|
30
|
Raajan NR, Lakshmi VSR, Prabaharan N. Non-Invasive Technique-Based Novel Corona (COVID-19) Virus Detection Using CNN. Natl Acad Sci Lett 2020; 44:347-350. [PMID: 32836613 PMCID: PMC7391230 DOI: 10.1007/s40009-020-01009-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Revised: 06/18/2020] [Accepted: 07/17/2020] [Indexed: 12/24/2022]
Abstract
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel human coronavirus first reported in Wuhan, China, in the latter half of 2019. Most of its primary epidemiological aspects are not well understood, which directly affects monitoring, practice and control. The main objective of this work is to propose a fast, accurate and highly sensitive CT-scan-based approach for the diagnosis of COVID-19. The CT scan images display several small patches of shadows and interstitial changes, particularly in the lung periphery. The proposed method uses a ResNet-architecture convolutional neural network trained on CT scan images to identify coronavirus-affected patients effectively. By comparing the test images with the training images, affected patients are identified accurately. An accuracy of 95.09% and a specificity of 81.89% are obtained on the sample dataset from CT images alone, without additional data such as geographical location or population density, and the sensitivity is 100%. Based on these results, COVID-19-positive patients can be classified effectively by the proposed method.
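The abstract names a ResNet architecture; its defining ingredient is the identity shortcut, y = ReLU(x + F(x)), which lets gradients bypass each transformation block. A minimal NumPy illustration of one residual block (weights, sizes and function names are arbitrary sketch choices, not the paper's network):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One simplified residual block: y = ReLU(x + W2 @ ReLU(W1 @ x)).
    The '+ x' identity shortcut is what distinguishes ResNet from a
    plain stacked network."""
    return relu(x + w2 @ relu(w1 @ x))

# toy weights: scaled identity matrices so the arithmetic is easy to follow
x = np.array([1.0, -2.0, 0.5])
w1 = np.eye(3) * 0.5
w2 = np.eye(3) * 0.1
y = residual_block(x, w1, w2)
```

In a real classifier such blocks are stacked with convolutions and batch normalization, and a final fully connected layer maps features to class scores (here, COVID-19 positive vs. negative).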
Affiliation(s)
- N R Raajan
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
- V S Ramya Lakshmi
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
- Natarajan Prabaharan
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
|
31
|
Turco S, Frinking P, Wildeboer R, Arditi M, Wijkstra H, Lindner JR, Mischi M. Contrast-Enhanced Ultrasound Quantification: From Kinetic Modeling to Machine Learning. Ultrasound Med Biol 2020; 46:518-543. [PMID: 31924424 DOI: 10.1016/j.ultrasmedbio.2019.11.008] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Revised: 11/13/2019] [Accepted: 11/14/2019] [Indexed: 05/14/2023]
Abstract
Ultrasound contrast agents (UCAs) have opened up immense diagnostic possibilities through the combined use of indicator dilution principles and dynamic contrast-enhanced ultrasound (DCE-US) imaging. UCAs are microbubbles encapsulated in a biocompatible shell. With a rheology comparable to that of red blood cells, UCAs provide an intravascular indicator for functional imaging of the (micro)vasculature by quantitative DCE-US. Several models of UCA intravascular kinetics have been proposed to provide functional quantitative maps, aiding diagnosis of different pathological conditions. This article is a comprehensive review of the available methods for quantitative DCE-US imaging based on temporal, spatial and spatiotemporal analysis of UCA kinetics. The recent introduction of novel UCAs targeted to specific vascular receptors has advanced DCE-US to a molecular imaging modality. In parallel, new kinetic models of increased complexity have been developed. The extraction of multiple quantitative maps, reflecting complementary variables of the underlying physiological processes, requires an integrative approach to their interpretation. A probabilistic framework based on emerging machine-learning methods nowadays represents the ultimate approach, improving the diagnostic accuracy of DCE-US imaging by optimally combining the extracted complementary information. The current value and future perspective of these advances are critically discussed.
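As a concrete example of the kinetic modeling this review covers, the log-normal indicator-dilution curve is one widely used model for bolus time-intensity curves (TICs) in DCE-US. This sketch evaluates a synthetic TIC and checks its numerical time-to-peak against the analytic mode of the log-normal; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def lognormal_tic(t, auc, mu, sigma, t0=0.0):
    """Log-normal indicator-dilution model for a bolus time-intensity
    curve: I(t) = AUC / ((t - t0) * sigma * sqrt(2*pi))
                  * exp(-(ln(t - t0) - mu)^2 / (2 * sigma^2)).
    auc scales the area under the curve; mu, sigma shape the wash-in
    and wash-out; t0 is the bolus arrival time."""
    tt = np.where(t > t0, t - t0, np.nan)
    curve = (auc / (tt * sigma * np.sqrt(2 * np.pi))
             * np.exp(-(np.log(tt) - mu) ** 2 / (2 * sigma ** 2)))
    return np.nan_to_num(curve)  # zero before bolus arrival

t = np.linspace(0.01, 60, 6000)            # seconds
tic = lognormal_tic(t, auc=100.0, mu=2.5, sigma=0.4)
t_peak = t[np.argmax(tic)]                 # numerical time-to-peak
t_peak_analytic = np.exp(2.5 - 0.4 ** 2)   # mode of the log-normal
```

In practice such a model is fitted to measured pixel or region TICs (e.g. by nonlinear least squares), and parameters like time-to-peak, wash-in rate and area under the curve become the quantitative perfusion maps the review discusses.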
Affiliation(s)
- Simona Turco
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Rogier Wildeboer
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marcel Arditi
- École polytechnique fédérale de Lausanne, Lausanne, Switzerland
- Hessel Wijkstra
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Amsterdam University Medical Center, Amsterdam, The Netherlands
- Jonathan R Lindner
- Knight Cardiovascular Center, Oregon Health & Science University, Portland, Oregon, USA
- Massimo Mischi
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
|