1. Chen J, Yang Y, Liu C, Feng H, Holmes JM, Zhang L, Frank SJ, Simone CB, Ma DJ, Patel SH, Liu W. Critical review of patient outcome study in head and neck cancer radiotherapy. arXiv 2025; arXiv:2503.15691v1. [PMID: 40166747; PMCID: PMC11957233]
Abstract
Rapid technological advances in radiation therapy have significantly improved dose delivery and tumor control for head and neck cancers. However, treatment-related toxicities caused by high-dose exposure to critical structures remain a significant clinical challenge, underscoring the need for accurate prediction of clinical outcomes, encompassing both tumor control and adverse events (AEs). This review critically evaluates the evolution of data-driven approaches to predicting outcomes in head and neck cancer patients treated with radiation therapy, from traditional dose-volume constraints to cutting-edge artificial intelligence (AI) and causal inference frameworks. For proton therapy, we also introduce the integration of linear energy transfer into patient outcome studies, which has uncovered critical mechanisms behind unexpected toxicities. Three transformative methodological advances are reviewed: radiomics, AI-based algorithms, and causal inference frameworks. While radiomics has enabled quantitative characterization of medical images, AI models have demonstrated predictive capability superior to that of traditional models. However, the field still faces significant challenges in translating statistical correlations from real-world data into interventional clinical insights. We highlight how causal inference methods can bridge this gap by providing a rigorous framework for identifying treatment effects. Looking ahead, we envision that combining these complementary approaches, especially interventional prediction models, will enable more personalized treatment strategies, ultimately improving both tumor control and quality of life for head and neck cancer patients treated with radiation therapy.
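The interventional, causal-inference direction this review argues for can be made concrete with a small sketch: inverse probability weighting (IPW) reweights observed outcomes by each patient's modeled probability of treatment, estimating the average treatment effect from observational data. The cohort, outcomes, and propensity scores below are hypothetical illustration values, not data from the review.

```python
# Minimal inverse-probability-weighting (IPW) sketch: estimate the average
# treatment effect (ATE) from observational (treatment, outcome) pairs.
# All numbers here are hypothetical illustration data, not from the review.

def ipw_ate(treated, outcomes, propensities):
    """ATE via the Horvitz-Thompson IPW estimator.
    treated[i] is 1 if patient i received the new regimen, else 0;
    propensities[i] is the modeled probability of treatment given covariates."""
    n = len(treated)
    weighted_treated = sum(t * y / p for t, y, p in zip(treated, outcomes, propensities))
    weighted_control = sum((1 - t) * y / (1 - p) for t, y, p in zip(treated, outcomes, propensities))
    return (weighted_treated - weighted_control) / n

# Hypothetical cohort: outcome 1 = tumor controlled at follow-up, 0 = not.
treated      = [1, 1, 1, 0, 0, 0]
outcomes     = [1, 1, 0, 1, 0, 0]
propensities = [0.8, 0.5, 0.5, 0.2, 0.5, 0.5]

ate = ipw_ate(treated, outcomes, propensities)
```

With a correctly specified propensity model, this estimator targets the interventional question ("what if we treated everyone?") that a purely correlational outcome model cannot answer.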
Affiliation(s)
- Jingyuan Chen
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Yunze Yang
- Department of Radiation Oncology, University of Miami, FL 33136, USA
- Chenbin Liu
- Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Hongying Feng
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- College of Mechanical and Power Engineering, China Three Gorges University, Yichang, Hubei 443002, People's Republic of China
- Department of Radiation Oncology, Guangzhou Concord Cancer Center, Guangzhou, Guangdong, 510555, People's Republic of China
- Jason M. Holmes
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Department of Oncology, The First Hospital of Hebei Medical University, Shijiazhuang, Hebei, 050023, People's Republic of China
- Steven J. Frank
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Daniel J. Ma
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN 55905, USA
- Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
2. Zhang Z, Lei Z, Zhou M, Hasegawa H, Gao S. Complex-Valued Convolutional Gated Recurrent Neural Network for Ultrasound Beamforming. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:5668-5679. [PMID: 38598398; DOI: 10.1109/tnnls.2024.3384314]
Abstract
Ultrasound detection is a potent tool for the clinical diagnosis of various diseases owing to its real-time, convenient, and noninvasive qualities. Yet, existing ultrasound beamforming and related methods face a major challenge in improving both the quality and the speed of imaging for the required clinical applications. The most notable characteristics of ultrasound signal data are its spatial and temporal features. Because most signals are complex-valued, directly processing them with real-valued networks leads to phase distortion and inaccurate output. In this study, for the first time, we propose a complex-valued convolutional gated recurrent (CCGR) neural network to handle ultrasound analytic signals with the aforementioned properties. The complex-valued network operations proposed in this study improve the beamforming accuracy of complex-valued ultrasound signals over traditional real-valued methods. Further, the proposed deep integration of convolutional and recurrent neural networks contributes greatly to extracting rich and informative ultrasound signal features. Our experimental results reveal imaging quality superior to existing state-of-the-art methods. More significantly, an ultrafast processing speed of only 0.07 s per image promises considerable clinical application potential. The code is available at https://github.com/zhangzm0128/CCGR.
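The phase-distortion point can be demonstrated in a few lines: filtering an analytic (complex-valued) signal with a complex kernel preserves its phase progression, which is exactly what a real-valued pipeline discards. This toy sketch is illustrative only and is not the paper's CCGR architecture.

```python
# Toy illustration of why complex-valued filtering matters for analytic
# ultrasound signals: a complex 1-D convolution keeps phase information that
# a real-valued pipeline (operating on the real part alone) would distort.
# This is a sketch, not the paper's CCGR network.
import cmath

def conv1d_complex(signal, kernel):
    """'Valid' 1-D convolution over complex sequences."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k)) for i in range(n - k + 1)]

# Analytic signal: unit-magnitude complex exponential (constant 0.3 rad phase step).
signal = [cmath.exp(1j * 0.3 * t) for t in range(8)]
kernel = [0.5 + 0.0j, 0.5 + 0.0j]          # simple complex averaging filter

out = conv1d_complex(signal, kernel)
# Phase differences between consecutive outputs still equal the 0.3 rad step:
# the filter scaled and delayed the signal without distorting its phase.
steps = [cmath.phase(out[i + 1] / out[i]) for i in range(len(out) - 1)]
```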
3. Wu S, Thawani R. Tumor-Agnostic Therapies in Practice: Challenges, Innovations, and Future Perspectives. Cancers (Basel) 2025; 17:801. [PMID: 40075649; PMCID: PMC11899253; DOI: 10.3390/cancers17050801]
Abstract
This review comprehensively analyzes the current landscape of tumor-agnostic therapies in oncology. Tumor-agnostic therapies are designed to target specific molecular alterations rather than the primary site of the tumor, representing a shift in cancer treatment. We discuss recent approvals by regulatory agencies such as the FDA and EMA, highlighting therapies that have demonstrated efficacy across multiple cancer types sharing common alterations. We delve into the trial methodologies that underpin these approvals, emphasizing innovative designs such as basket trials and umbrella trials. These methodologies present unique advantages, including increased efficiency in patient recruitment and the ability to assess drug efficacy in diverse populations rapidly. However, they also entail certain challenges, including the need for robust biomarkers and the complexities of regulatory requirements. Moreover, we examine the promising prospects for developing therapies for rare cancers that exhibit common molecular targets typically associated with more prevalent malignancies. By synthesizing these insights, this review underscores the transformative potential of tumor-agnostic therapies in oncology. It offers a pathway for personalized cancer treatment that transcends conventional histology-based classification.
Affiliation(s)
- Rajat Thawani
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL 60637, USA
4. Munguía-Siu A, Vergara I, Espinoza-Rodríguez JH. The Use of Hybrid CNN-RNN Deep Learning Models to Discriminate Tumor Tissue in Dynamic Breast Thermography. J Imaging 2024; 10:329. [PMID: 39728226; PMCID: PMC11728322; DOI: 10.3390/jimaging10120329]
Abstract
Breast cancer is one of the leading causes of death for women worldwide, and early detection can help reduce the death rate. Infrared thermography has gained popularity as a non-invasive and rapid method for detecting this pathology and can be further enhanced by applying neural networks to extract spatial and even temporal data derived from breast thermographic images if they are acquired sequentially. In this study, we evaluated hybrid convolutional-recurrent neural network (CNN-RNN) models based on five state-of-the-art pre-trained CNN architectures coupled with three RNNs to discern tumor abnormalities in dynamic breast thermographic images. The hybrid architecture that achieved the best performance for detecting breast cancer was VGG16-LSTM, which showed accuracy (ACC), sensitivity (SENS), and specificity (SPEC) of 95.72%, 92.76%, and 98.68%, respectively, with a CPU runtime of 3.9 s. However, the hybrid architecture that showed the fastest CPU runtime was AlexNet-RNN with 0.61 s, although with lower performance (ACC: 80.59%, SENS: 68.52%, SPEC: 92.76%), but still superior to AlexNet (ACC: 69.41%, SENS: 52.63%, SPEC: 86.18%) with 0.44 s. Our findings show that hybrid CNN-RNN models outperform stand-alone CNN models, indicating that temporal data recovery from dynamic breast thermographs is possible without significantly compromising classifier runtime.
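The ACC/SENS/SPEC figures quoted above all derive from a binary confusion matrix; a small helper makes the definitions explicit. The counts used here are hypothetical, not the study's data.

```python
# Accuracy, sensitivity, and specificity, as reported for the CNN-RNN models,
# are derived from the binary confusion matrix. Illustrative helper with
# hypothetical counts (not the study's actual data).

def binary_metrics(tp, fn, tn, fp):
    """Return (accuracy, sensitivity, specificity) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)       # true-positive rate: tumors detected
    specificity = tn / (tn + fp)       # true-negative rate: healthy correctly cleared
    return accuracy, sensitivity, specificity

acc, sens, spec = binary_metrics(tp=45, fn=5, tn=47, fp=3)
```

The trade-off reported in the abstract (AlexNet-RNN: high specificity, low sensitivity) is exactly what these per-class rates expose and plain accuracy hides.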
Affiliation(s)
- Andrés Munguía-Siu
- Department of Computing, Electronics and Mechatronics, Universidad de las Américas Puebla, Sta. Catarina Martir, San Andrés Cholula 72810, Mexico
- Irene Vergara
- Department of Immunology, Instituto de Investigaciones Biomédicas, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Juan Horacio Espinoza-Rodríguez
- Department of Computing, Electronics and Mechatronics, Universidad de las Américas Puebla, Sta. Catarina Martir, San Andrés Cholula 72810, Mexico
5. Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. [PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809]
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis, with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL for MR-based purposes, datasets with magnetic field strengths of 3 T, 1.5 T, and 3/1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively; 6 of the 7 studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies analyzing DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. Three studies, each analyzing DL for prostate biopsy, were performed with single- and multi-institutional datasets; TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
6. Hernandez Torres SI, Ruiz A, Holland L, Ortiz R, Snider EJ. Evaluation of Deep Learning Model Architectures for Point-of-Care Ultrasound Diagnostics. Bioengineering (Basel) 2024; 11:392. [PMID: 38671813; PMCID: PMC11048259; DOI: 10.3390/bioengineering11040392]
Abstract
Point-of-care ultrasound imaging is a critical tool for patient triage during trauma, both for diagnosing injuries and for prioritizing limited medical evacuation resources. Specifically, an eFAST exam evaluates whether there are free fluids in the chest or abdomen, but this is only possible if ultrasound scans can be accurately interpreted, a challenge in the pre-hospital setting. In this effort, we evaluated artificial intelligence (AI) models for eFAST image interpretation. Widely used deep learning architectures, as well as Bayesian-optimized models, were evaluated for six diagnostic tasks: pneumothorax in (i) B- or (ii) M-mode, hemothorax in (iii) B- or (iv) M-mode, (v) pelvic or bladder abdominal hemorrhage, and (vi) right upper quadrant abdominal hemorrhage. Models were trained using images captured in 27 swine. Using a leave-one-subject-out training approach, the MobileNetV2 and DarkNet53 models surpassed 85% accuracy at each M-mode scan site. The B-mode models performed worse, with accuracies between 68% and 74%, except for the pelvic hemorrhage model, which reached only 62% accuracy across all architectures. These results highlight which eFAST scan sites can be readily automated with image interpretation models, while other scan sites, such as bladder hemorrhage, will require more robust model development or data augmentation to improve performance. With these improvements, the skill threshold for ultrasound-based triage can be reduced, expanding its utility in the pre-hospital setting.
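The leave-one-subject-out protocol mentioned above holds out every image from one subject per fold, so the same animal never appears in both training and test data. A minimal sketch, with hypothetical subject IDs and image names:

```python
# Sketch of a leave-one-subject-out protocol: all images from one subject
# (e.g. one swine) form the test fold, preventing subject-level leakage
# between training and testing. Subject IDs and images are hypothetical.

def leave_one_subject_out(samples):
    """samples: list of (subject_id, image) pairs.
    Yields (held_out_subject, train_set, test_set) for each subject."""
    subjects = sorted({sid for sid, _ in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

data = [("pig1", "imgA"), ("pig1", "imgB"), ("pig2", "imgC"), ("pig3", "imgD")]
folds = list(leave_one_subject_out(data))
```

Splitting at the image level instead would let near-duplicate frames from one animal land on both sides of the split and inflate accuracy.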
Affiliation(s)
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, Joint Base San Antonio, Fort Sam Houston, San Antonio, TX 78234, USA
7. Wilson PFR, Gilany M, Jamzad A, Fooladgar F, To MNN, Wodlinger B, Abolmaesumi P, Mousavi P. Self-Supervised Learning With Limited Labeled Data for Prostate Cancer Detection in High-Frequency Ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:1073-1083. [PMID: 37478033; DOI: 10.1109/tuffc.2023.3297840]
Abstract
Deep learning-based analysis of high-frequency, high-resolution micro-ultrasound data shows great promise for prostate cancer (PCa) detection. Previous approaches to the analysis of ultrasound data largely follow a supervised learning (SL) paradigm. Ground truth labels for ultrasound images used to train deep networks often consist of coarse annotations generated from the histopathological analysis of tissue samples obtained via biopsy. This creates inherent limitations on the availability and quality of labeled data, posing major challenges to the success of SL methods. However, unlabeled prostate ultrasound data are more abundant. In this work, we successfully apply self-supervised representation learning to micro-ultrasound data. Using ultrasound data from 1028 biopsy cores of 391 subjects obtained at two clinical centers, we demonstrate that feature representations learned with this method can be used to classify cancer from noncancer tissue, obtaining an AUROC score of 91% on an independent test set. To the best of our knowledge, this is the first successful end-to-end self-supervised learning (SSL) approach for PCa detection using ultrasound data. Our method outperforms baseline SL approaches, generalizes well between different data centers, and scales well in performance as more unlabeled data are added, making it a promising approach for future research using large volumes of unlabeled data. Our code is publicly available at https://www.github.com/MahdiGilany/SSL_micro_ultrasound.
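The AUROC reported above has a direct probabilistic reading: it is the chance that a randomly chosen cancer core scores higher than a randomly chosen benign core. A tie-aware pure-Python computation over hypothetical classifier scores:

```python
# AUROC via the Mann-Whitney formulation: over all (positive, negative)
# pairs, a pair where the positive outranks the negative counts 1 and a
# tie counts 0.5. Scores and labels below are hypothetical.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]   # hypothetical cancer-probability outputs
labels = [1, 1, 0, 1, 0]             # 1 = cancer core, 0 = benign core
area = auroc(scores, labels)
```

Because it depends only on the ranking of scores, AUROC is insensitive to the classification threshold, which is useful when the operating point differs between clinical centers.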
8. Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608]
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Given the rapid development of deep learning methods, the very high accuracy and timeliness that cancer diagnosis requires, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network techniques emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting-prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology to medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-training models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and supervised paradigms. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
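Of the overfitting-prevention methods listed, dropout is the simplest to sketch: during training each activation is zeroed with probability p, and the survivors are scaled by 1/(1 - p) ("inverted" dropout), so no rescaling is needed at inference. A minimal illustration with hypothetical activation values:

```python
# Inverted dropout, one of the overfitting-prevention methods surveyed:
# zero each activation with probability p during training and scale the
# survivors by 1/(1 - p) so the expected activation is unchanged.
# Activation values below are hypothetical.
import random

def dropout(activations, p, rng):
    """Inverted dropout; returns a new list. p is the drop probability."""
    if p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
acts = [1.0, 2.0, 3.0, 4.0]
out = dropout(acts, p=0.5, rng=rng)   # each surviving unit is doubled
```

At inference, the layer is simply the identity; the scaling during training keeps the expected pre-activation statistics consistent between the two phases.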
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
9. Zhang Y, Li W, Zhang Z, Xue Y, Liu YL, Nie K, Su MY, Ye Q. Differential diagnosis of prostate cancer and benign prostatic hyperplasia based on DCE-MRI using bi-directional CLSTM deep learning and radiomics. Med Biol Eng Comput 2023; 61:757-771. [PMID: 36598674; PMCID: PMC10548872; DOI: 10.1007/s11517-022-02759-x]
Abstract
Although dynamic contrast-enhanced MRI (DCE-MRI) has long been routinely included in the prostate MRI protocol, its role has been questioned. It provides rich spatial and temporal information; however, the contained information cannot be fully extracted by radiologists' visual evaluation, and more sophisticated computer algorithms are needed to extract the higher-order information. The purpose of this study was to apply a new deep learning algorithm, the bi-directional convolutional long short-term memory (CLSTM) network, and radiomics analysis to the differential diagnosis of prostate cancer (PCa) and benign prostatic hyperplasia (BPH). To systematically investigate the optimal amount of peritumoral tissue for improving diagnosis, a total of 9 ROIs were delineated using 3 different methods. The results showed that bi-directional CLSTM with the ±20% region-growing peritumoral ROI achieved a mean AUC of 0.89, better than the mean AUC of 0.84 obtained using the tumor alone without any peritumoral tissue (p = 0.25, not significant). For all 9 ROIs, deep learning had a higher AUC than radiomics, but the difference reached significance only for the ±20% region-growing peritumoral ROI (0.89 vs. 0.79, p = 0.04). In conclusion, the kinetic information extracted from DCE-MRI using bi-directional CLSTM may provide helpful supplementary information for the diagnosis of PCa.
Affiliation(s)
- Yang Zhang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Weikang Li
- Department of Radiology, The Children's Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zhao Zhang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yingnan Xue
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yan-Lin Liu
- Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Min-Ying Su
- Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Qiong Ye
- High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, 350 Shushanhu Road, Hefei, 230031, Anhui, People's Republic of China
10. Bakrania A, Joshi N, Zhao X, Zheng G, Bhat M. Artificial intelligence in liver cancers: Decoding the impact of machine learning models in clinical diagnosis of primary liver cancers and liver cancer metastases. Pharmacol Res 2023; 189:106706. [PMID: 36813095; DOI: 10.1016/j.phrs.2023.106706]
Abstract
Liver cancers are the fourth leading cause of cancer-related mortality worldwide. In the past decade, breakthroughs in the field of artificial intelligence (AI) have inspired development of algorithms in the cancer setting. A growing body of recent studies have evaluated machine learning (ML) and deep learning (DL) algorithms for pre-screening, diagnosis and management of liver cancer patients through diagnostic image analysis, biomarker discovery and predicting personalized clinical outcomes. Despite the promise of these early AI tools, there is a significant need to explain the 'black box' of AI and work towards deployment to enable ultimate clinical translatability. Certain emerging fields such as RNA nanomedicine for targeted liver cancer therapy may also benefit from application of AI, specifically in nano-formulation research and development given that they are still largely reliant on lengthy trial-and-error experiments. In this paper, we put forward the current landscape of AI in liver cancers along with the challenges of AI in liver cancer diagnosis and management. Finally, we have discussed the future perspectives of AI application in liver cancer and how a multidisciplinary approach using AI in nanomedicine could accelerate the transition of personalized liver cancer medicine from bench side to the clinic.
Affiliation(s)
- Anita Bakrania
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
- Xun Zhao
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada
- Gang Zheng
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Mamatha Bhat
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Division of Gastroenterology, Department of Medicine, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Medical Sciences, Toronto, ON, Canada
11. Guo H, Chao H, Xu S, Wood BJ, Wang J, Yan P. Ultrasound Volume Reconstruction From Freehand Scans Without Tracking. IEEE Trans Biomed Eng 2023; 70:970-979. [PMID: 36103448; PMCID: PMC10011008; DOI: 10.1109/tbme.2022.3206596]
Abstract
Transrectal ultrasound is commonly used for guiding prostate cancer biopsy, where 3D ultrasound volume reconstruction is often desired. Current methods for 3D reconstruction from freehand ultrasound scans require external tracking devices to provide spatial information about the ultrasound transducer. This paper presents a novel deep learning approach for sensorless ultrasound volume reconstruction, which efficiently exploits content correspondence between ultrasound frames to reconstruct 3D volumes without external tracking. The underlying deep learning model, the deep contextual-contrastive network (DC2-Net), utilizes self-attention to focus on the speckle-rich areas to estimate spatial movement and then minimizes a margin ranking loss for contrastive feature learning. A case-wise correlation loss over the entire input video helps further smooth the estimated trajectory. We train and validate DC2-Net on two independent datasets, one containing 619 transrectal scans and the other 100 transperineal scans. Our proposed approach attained superior performance compared with other methods, with a drift rate of 9.64% and a prostate Dice of 0.89. The promising results demonstrate the capability of deep neural networks for universal ultrasound volume reconstruction from freehand 2D ultrasound scans without tracking information.
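The margin ranking loss mentioned above has a standard form: for scores x1 and x2 with target y = +1 (x1 should rank higher) or y = -1, the loss is max(0, -y·(x1 - x2) + margin). The sketch below shows that generic formulation, not the authors' exact implementation:

```python
# Generic margin ranking loss of the kind used for contrastive feature
# learning: the loss is zero once the correctly ordered pair is separated
# by at least the margin, and grows linearly with any violation.
# Standard formulation; not the authors' exact implementation.

def margin_ranking_loss(x1, x2, y, margin=1.0):
    """y = +1 means x1 should rank above x2; y = -1 means the reverse."""
    return max(0.0, -y * (x1 - x2) + margin)

# Correctly ordered pair with a gap larger than the margin: no loss.
no_loss = margin_ranking_loss(3.0, 1.0, y=1, margin=1.0)
# Mis-ordered pair: penalized by the violation plus the margin.
penalty = margin_ranking_loss(1.0, 3.0, y=1, margin=1.0)
```

A hinge of this shape pushes embeddings of dissimilar frame pairs apart only until the margin is met, which keeps training focused on the pairs that are still confusable.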
12. Olaniyi EO, Komolafe TE, Oyedotun OK, Oyemakinde TT, Abdelaziz M, Khashman A. Eye Melanoma Diagnosis System using Statistical Texture Feature Extraction and Soft Computing Techniques. J Biomed Phys Eng 2023; 13:77-88. [PMID: 36818006; PMCID: PMC9923246; DOI: 10.31661/jbpe.v0i0.2101-1268]
Abstract
BACKGROUND Eye melanoma grows and develops in the tissues of the middle layer of the eyeball, resulting in dark spots in the iris, changes in the size and shape of the pupil, and changes in vision. OBJECTIVE The current study aims to diagnose eye melanoma using a gray level co-occurrence matrix (GLCM) for texture extraction together with soft computing techniques, making diagnosis faster and helping prevent the misdiagnosis that can result from the physician's manual approach. MATERIAL AND METHODS In this experimental study, two models are proposed for the diagnosis of eye melanoma: a backpropagation neural network (BPNN) and a radial basis function network (RBFN). The images used for training and validation were obtained from the eye-cancer database. RESULTS Based on our experiments, the proposed models achieve recognition rates of 92.31% for GLCM+BPNN and 94.70% for GLCM+RBFN. CONCLUSION Compared with other proposed models, the models used in the current study perform better.
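A GLCM of the kind used here counts how often pairs of gray levels co-occur at a fixed offset; texture features such as contrast are then computed from the normalized matrix. A minimal sketch for the horizontal neighbor offset on a tiny hypothetical image:

```python
# Minimal gray-level co-occurrence matrix (GLCM) sketch: count how often
# gray level j appears immediately to the right of gray level i, normalize
# the counts, then derive a texture feature (contrast). The 3x3 two-level
# image is a hypothetical example; real use quantizes to more levels and
# aggregates several offsets.

def glcm(image, levels):
    """Co-occurrence probabilities for the horizontal offset (0, 1)."""
    counts = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
    total = sum(sum(r) for r in counts)
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Weights each co-occurrence by the squared gray-level difference."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

image = [[0, 0, 1],
         [0, 1, 1],
         [1, 1, 0]]
m = glcm(image, levels=2)
```

Features like contrast, energy, and homogeneity computed this way form the fixed-length texture vector fed to a classifier such as the BPNN or RBFN above.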
Affiliation(s)
- Ebenezer Obaloluwa Olaniyi
- Center for Quantum Computational System, Department of Electrical and Electronics Engineering, Adeleke University, Osun State, Nigeria
- European Centre for Research and Academic Affairs, Lefkosa, Turkey
- Temitope Emmanuel Komolafe
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Oyebade Kayode Oyedotun
- Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg
- Mohamed Abdelaziz
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Adnan Khashman
- European Centre for Research and Academic Affairs, Turkey
Collapse
13.
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091] [DOI: 10.1016/j.compmedimag.2022.102125]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer and provides an important reference for clinical decisions on therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network labels the prostate cancer lesion region; (2) a Gleason grading network (GNet) performs pathological grading of prostate ultrasound images. In RLOD, a new feature-fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch that enhances the fusion of high-level and low-level features, helping to detect small lesions and extract image detail. In GNet, a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks enhances the RLOD detections for use as training samples; the enhanced results and the original ones are then fed into the channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, the lesion detection performance of RLOD was evaluated, achieving a mean Dice metric of 0.815.
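The paper does not spell out the CACN internals here, but the "channel attention" it names is commonly implemented squeeze-and-excitation style. A minimal sketch under that assumption; the bottleneck size and random weights are purely illustrative:

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) array:
    global-average-pool each channel, run the pooled vector through a small
    bottleneck MLP, then rescale every channel by its sigmoid weight."""
    squeezed = feature_maps.mean(axis=(1, 2))        # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeezed)          # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # (C,) attention weights
    return feature_maps * weights[:, None, None], weights

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 5, 5))   # 8 channels of 5x5 features (toy values)
w1 = rng.normal(size=(2, 8))        # bottleneck: 8 -> 2
w2 = rng.normal(size=(8, 2))        # expand back: 2 -> 8
out, weights = channel_attention(fmap, w1, w2)
```

The design lets the classifier learn which feature channels matter for grading and suppress the rest before the final prediction layers.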
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui
- Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Shen Zhao
- Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
14.
Li H, Bhatt M, Qu Z, Zhang S, Hartel MC, Khademhosseini A, Cloutier G. Deep learning in ultrasound elastography imaging: A review. Med Phys 2022; 49:5993-6018. [PMID: 35842833] [DOI: 10.1002/mp.15856]
Abstract
It is known that changes in the mechanical properties of tissues are associated with the onset and progression of certain diseases. Ultrasound elastography characterizes tissue stiffness using ultrasound imaging, either by measuring tissue strain (quasi-static or natural organ pulsation elastography) or by tracking a shear wave propagated from an external source or a natural vibration (dynamic elastography). In recent years, deep learning has begun to emerge in ultrasound elastography research. In this review, several deep learning frameworks common in the computer vision community, such as the multilayer perceptron, convolutional neural network, and recurrent neural network, are described. Recent advances in ultrasound elastography using these techniques are then revisited in terms of algorithm development and clinical diagnosis. Finally, the current challenges and future directions of deep learning in ultrasound elastography are discussed.
Affiliation(s)
- Hongliang Li
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada
- Manish Bhatt
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Zhen Qu
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Shiming Zhang
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Martin C Hartel
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Ali Khademhosseini
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Guy Cloutier
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada; Department of Radiology, Radio-Oncology and Nuclear Medicine, University of Montreal, Montréal, Québec, Canada
15.
Zhou B, Yang X, Curran WJ, Liu T. Artificial Intelligence in Quantitative Ultrasound Imaging: A Survey. J Ultrasound Med 2022; 41:1329-1342. [PMID: 34467542] [DOI: 10.1002/jum.15819]
Abstract
Quantitative ultrasound (QUS) imaging is a safe, reliable, inexpensive, and real-time technique for extracting physically descriptive parameters to assess pathologies. Compared with other major imaging modalities such as computed tomography and magnetic resonance imaging, QUS suffers from major drawbacks: poor image quality and inter- and intra-observer variability. There is therefore a great need to develop automated methods to improve the image quality of QUS. In recent years, interest in artificial intelligence (AI) applications in medical imaging has grown, and a large number of studies of AI in QUS have been conducted. The purpose of this review is to describe and categorize recent research into AI applications in QUS. We first introduce the AI workflow and then discuss the various AI applications in QUS. Finally, challenges and potential future AI applications in QUS are discussed.
Affiliation(s)
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
16.
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk stratification, and management. This review provides a comprehensive overview of the literature on the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
17.
Elameer AS, Jaber MM, Abd SK. Radiography image analysis using cat swarm optimized deep belief networks. J Intell Syst 2021; 31:40-54. [DOI: 10.1515/jisys-2021-0172]
Abstract
Radiography images are widely used in the health sector to assess a patient's condition. Noise and irrelevant region information reduce overall disease detection accuracy and increase computational complexity. In this study, the statistical Kolmogorov-Smirnov test is therefore integrated with the wavelet transform to address de-noising. A cat-swarm-optimized deep belief network is then applied to extract features from the affected region. The optimized deep learning model reduces feature training cost and time and improves overall disease detection accuracy. The network learning process follows the AdaDelta rule, which replaces the learning rate with a delta value, minimizing the error rate while recognizing the disease. The efficiency of the system is evaluated using the image retrieval in medical applications (IRMA) dataset, which covers areas such as breast, lung, and pediatric studies.
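The AdaDelta rule the abstract refers to is standard (Zeiler, 2012): the learning rate is replaced by a ratio of running RMS values of past updates (the "delta") and past gradients. A minimal sketch on a toy quadratic, using Zeiler's default hyperparameters rather than anything from this paper:

```python
import numpy as np

def adadelta_minimize(grad_fn, x0, rho=0.95, eps=1e-6, steps=5000):
    """AdaDelta update: the step size is the ratio of running RMS values of
    past parameter updates and past gradients, so no learning rate is set."""
    x = np.asarray(x0, dtype=float)
    eg2 = np.zeros_like(x)  # E[g^2]: decaying average of squared gradients
    ed2 = np.zeros_like(x)  # E[dx^2]: decaying average of squared updates
    for _ in range(steps):
        g = np.asarray(grad_fn(x), dtype=float)
        eg2 = rho * eg2 + (1 - rho) * g ** 2
        dx = -np.sqrt(ed2 + eps) / np.sqrt(eg2 + eps) * g
        ed2 = rho * ed2 + (1 - rho) * dx ** 2
        x = x + dx
    return x

# Toy objective f(x) = (x - 3)^2 with gradient 2*(x - 3).
xmin = adadelta_minimize(lambda x: 2.0 * (x - 3.0), x0=[0.0])
```

Because both accumulators are self-scaling, the same update rule applies unchanged to the weights of a deep belief network.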
Affiliation(s)
- Amer S. Elameer
- Biomedical Informatics College, University of Information Technology and Communications (UOITC), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
- Department of Computer Science, Al-Turath University College, Baghdad, Iraq
- Sura Khalil Abd
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
18.
Scapicchio C, Gabelloni M, Barucci A, Cioni D, Saba L, Neri E. A deep look into radiomics. Radiol Med 2021; 126:1296-1311. [PMID: 34213702] [PMCID: PMC8520512] [DOI: 10.1007/s11547-021-01389-x]
Abstract
Radiomics is a process that allows the extraction and analysis of quantitative data from medical images. It is an evolving field of research with many potential applications in medical imaging. The purpose of this review is to offer a deep look into radiomics, from the basics, discussed in technical depth, through the main applications, to the challenges that must be addressed to translate the process into clinical practice. A detailed description is given of the main techniques used in each step of the radiomics workflow (image acquisition, reconstruction, pre-processing, segmentation, feature extraction, and analysis), together with an overview of the most promising results achieved in various applications, focusing on the limitations and possible solutions for clinical implementation. Only an in-depth and comprehensive description of current methods and applications can convey the potential of radiomics to foster precision medicine and thus patient care, especially in cancer detection, diagnosis, prognosis, and treatment evaluation.
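The feature-extraction step of that workflow can be illustrated with a handful of first-order (histogram) features; the bin count and feature selection below are illustrative choices, not a standard such as IBSI.

```python
import numpy as np

def first_order_features(roi, bins=16):
    """A few first-order (histogram-based) radiomic features of a segmented ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before log
    mu, sigma = x.mean(), x.std()
    return {
        "mean": mu,
        "variance": x.var(),
        "skewness": ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-12),
        "entropy": -np.sum(p * np.log2(p)),       # histogram entropy in bits
    }

rng = np.random.default_rng(1)
roi = rng.normal(loc=100.0, scale=10.0, size=(32, 32))  # synthetic ROI intensities
feats = first_order_features(roi)
```

In a full radiomics pipeline these scalars, along with texture and shape features, would be the inputs to downstream statistical or machine-learning models.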
Affiliation(s)
- Camilla Scapicchio
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Andrea Barucci
- CNR-IFAC Institute of Applied Physics "N. Carrara", 50019, Sesto Fiorentino, Italy
- Dania Cioni
- Academic Radiology, Department of Surgical, Medical, Molecular Pathology and Emergency Medicine, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Monserrato (Cagliari), Cagliari, Italy
- Emanuele Neri
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma 67, 56126, Pisa, Italy
- Italian Society of Medical and Interventional Radiology, SIRM Foundation, Via della Signora 2, 20122, Milano, Italy
19.
Gao Y, Yan P, Kruger U, Cavuoto L, Schwaitzberg S, De S, Intes X. Functional Brain Imaging Reliably Predicts Bimanual Motor Skill Performance in a Standardized Surgical Task. IEEE Trans Biomed Eng 2021; 68:2058-2066. [PMID: 32755850] [PMCID: PMC8265734] [DOI: 10.1109/tbme.2020.3014299]
Abstract
Currently, there is a dearth of objective metrics for assessing bimanual motor skills, which are critical in high-stakes professions such as surgery. Recently, functional near-infrared spectroscopy (fNIRS) has been shown to be effective at classifying motor task types, and can therefore potentially be used to assess motor performance level. In this work, we use fNIRS data to predict performance scores in a standardized bimanual motor task used in surgical certification and propose a deep-learning framework, 'Brain-NET', to extract features from the fNIRS data. Our results demonstrate that Brain-NET accurately predicts bimanual surgical motor skill from neuroimaging data (R² = 0.73). Furthermore, the classification ability of the Brain-NET model is demonstrated by receiver operating characteristic (ROC) analysis, with an area under the curve (AUC) of 0.91. These results establish that fNIRS combined with deep learning analysis is a promising method for bedside, quick, and cost-effective assessment of bimanual skill levels.
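The two figures of merit quoted in this abstract (R² = 0.73 and AUC = 0.91) are standard metrics; the following are minimal reference implementations, not the authors' code.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) for regression performance."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney identity: the fraction of
    positive/negative pairs ranked correctly (ties count one half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]        # all positive-negative score pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` ranks three of the four positive/negative pairs correctly, giving 0.75.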
20.
Kuang M, Hu HT, Li W, Chen SL, Lu XZ. Articles That Use Artificial Intelligence for Ultrasound: A Reader's Guide. Front Oncol 2021; 11:631813. [PMID: 34178622] [PMCID: PMC8222674] [DOI: 10.3389/fonc.2021.631813]
Abstract
Artificial intelligence (AI) transforms medical images into high-throughput mineable data. Machine learning algorithms, which can be designed for lesion detection, target segmentation, disease diagnosis, and prognosis prediction, have markedly advanced precision medicine for clinical decision support. The number of published articles, including articles on ultrasound with AI, has increased dramatically within only a few years. Given the unique properties that differentiate ultrasound from other imaging modalities, including real-time scanning, operator dependence, and multi-modality, readers should pay particular attention when assessing studies that rely on ultrasound AI. This review offers readers a targeted guide covering the critical points that can be used to distinguish strong from underpowered ultrasound AI studies.
Affiliation(s)
- Ming Kuang
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China; Department of Hepatobiliary Surgery, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Hang-Tong Hu
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Wei Li
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Shu-Ling Chen
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Xiao-Zhou Lu
- Department of Traditional Chinese Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
21.
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110] [DOI: 10.1016/j.ejrad.2021.109717]
Abstract
Ultrasound (US), a flexible "green" imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually acquire and visually evaluate images for the detection, identification, and monitoring of diseases; diagnostic performance is inevitably reduced by the high operator dependence intrinsic to US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessments of imaging data, showing high potential to help physicians obtain more accurate and reproducible results. In this article, we provide a general understanding of AI, machine learning (ML), and deep learning (DL) technologies; we then review the rapidly growing applications of AI, especially DL, in US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, the musculoskeletal system, and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
22.
Dohopolski M, Chen L, Sher D, Wang J. Predicting lymph node metastasis in patients with oropharyngeal cancer by using a convolutional neural network with associated epistemic and aleatoric uncertainty. Phys Med Biol 2020; 65:225002. [PMID: 33179605] [DOI: 10.1088/1361-6560/abb71c]
Abstract
There can be significant uncertainty when identifying cervical lymph node (LN) metastases in patients with oropharyngeal squamous cell carcinoma (OPSCC) despite the use of modern imaging modalities such as positron emission tomography (PET) and computed tomography (CT) scans. Grossly involved LNs are readily identifiable during routine imaging, but smaller and less PET-avid LNs are harder to classify. We trained a convolutional neural network (CNN) to detect malignant LNs in patients with OPSCC and used quantitative measures of uncertainty to identify the most reliable predictions. Our dataset consisted of images of 791 LNs from 129 patients with OPSCC who had preoperative PET/CT imaging and detailed pathological reports after neck dissections. These LNs were segmented on PET/CT imaging and then labeled according to the pathology reports. An AlexNet-like CNN was trained to classify LNs as malignant or benign. We estimated epistemic and aleatoric uncertainty by using dropout variational inference and test-time augmentation, respectively. CNN performance was stratified according to the median epistemic and aleatoric uncertainty values calculated using the validation cohort. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.99 on the testing dataset. Sensitivity and specificity were 0.94 and 0.90, respectively. Epistemic and aleatoric uncertainty values were statistically larger for false negative and false positive predictions than for true negative and true positive predictions (p < 0.001). Model sensitivity and specificity were 1.0 and 0.98, respectively, for cases with epistemic uncertainty lower than the median value of the incorrect predictions in the validation dataset. For cases with higher epistemic uncertainty, sensitivity and specificity were 0.67 and 0.41, respectively. Model sensitivity and specificity were 1.0 and 0.98, respectively, for cases with aleatoric uncertainty lower than the median value of the incorrect predictions in the validation dataset. For cases with higher aleatoric uncertainty, sensitivity and specificity were 0.67 and 0.37, respectively. We used a CNN to predict the malignant status of LNs in patients with OPSCC with high accuracy, and we showed that uncertainty can be used to quantify a prediction's reliability. Assigning measures of uncertainty to predictions could improve the accuracy of LN classification by efficiently identifying instances where expert evaluation is needed to corroborate a model's prediction.
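The two uncertainty estimates used here can be sketched generically: Monte-Carlo dropout for epistemic uncertainty and test-time augmentation as an aleatoric proxy. The toy logistic "network", dropout rate, and Gaussian augmentation below are illustrative stand-ins for the paper's CNN pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_predict(x, w, n_samples=200, drop_p=0.5):
    """Epistemic uncertainty via Monte-Carlo dropout: keep dropout active at
    test time and read the spread of sampled predictions as uncertainty."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) >= drop_p        # sample a random sub-network
        logit = x @ (w * mask) / (1.0 - drop_p)     # inverted-dropout scaling
        preds.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid probability
    preds = np.array(preds)
    return preds.mean(), preds.std()                # prediction, epistemic unc.

def tta_predict(x, predict_fn, n_aug=50, noise=0.05):
    """Aleatoric proxy via test-time augmentation: perturb the input and
    measure how much the deterministic prediction moves."""
    preds = np.array([predict_fn(x + rng.normal(0.0, noise, x.shape))
                      for _ in range(n_aug)])
    return preds.mean(), preds.std()

x = rng.normal(size=8)                              # toy feature vector
w = rng.normal(size=8)                              # toy model weights
p_mc, epistemic = mc_dropout_predict(x, w)
p_tta, aleatoric = tta_predict(x, lambda z: 1.0 / (1.0 + np.exp(-(z @ w))))
```

Stratifying cases by these standard deviations, as the study does with median splits, flags the predictions most in need of expert review.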
Affiliation(s)
- Michael Dohopolski
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, United States of America
23.

24.
Shao Y, Wang J, Wodlinger B, Salcudean SE. Improving Prostate Cancer (PCa) Classification Performance by Using Three-Player Minimax Game to Reduce Data Source Heterogeneity. IEEE Trans Med Imaging 2020; 39:3148-3158. [PMID: 32305907] [DOI: 10.1109/tmi.2020.2988198]
Abstract
PCa is a disease with a wide range of tissue patterns, which adds to its classification difficulty. Moreover, data source heterogeneity, i.e., inconsistent data collected using different machines, under different conditions, by different operators, and from patients of different ethnic groups, further hinders the training of a generalized PCa classifier. In this paper, for the first time, a Generative Adversarial Network (GAN)-based three-player minimax game framework is used to tackle data source heterogeneity and to improve PCa classification performance, with a proposed modified U-Net as the encoder. Our dataset consists of novel high-frequency ExactVu ultrasound (US) data collected from 693 patients at five data centers. Gleason Scores (GSs) are assigned to the 12 prostatic regions of each patient. Two classification tasks, benign vs. malignant and low- vs. high-grade, are conducted, and the classification results of different prostatic regions are compared. For benign vs. malignant classification, the three-player minimax game framework achieves an area under the receiver operating characteristic curve (AUC) of 93.4%, a sensitivity of 95.1%, and a specificity of 87.7%, improvements of 5.0%, 3.9%, and 6.0% over training directly on the heterogeneous data, confirming its effectiveness for PCa classification.
25.
The Application of Artificial Intelligence in Prostate Cancer Management—What Improvements Can Be Expected? A Systematic Review. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186428]
Abstract
Artificial Intelligence (AI) is progressively remodeling our daily life. A large amount of information from “big data” now enables machines to perform predictions and improve our healthcare system. AI has the potential to reshape prostate cancer (PCa) management thanks to growing applications in the field. The purpose of this review is to provide a global overview of AI in PCa for urologists, pathologists, radiotherapists, and oncologists to consider future changes in their daily practice. A systematic review was performed, based on PubMed MEDLINE, Google Scholar, and DBLP databases for original studies published in English from January 2009 to January 2019 relevant to PCa, AI, Machine Learning, Artificial Neural Networks, Convolutional Neural Networks, and Natural-Language Processing. Only articles with full text accessible were considered. A total of 1008 articles were reviewed, and 48 articles were included. AI has potential applications in all fields of PCa management: analysis of genetic predispositions, diagnosis in imaging, and pathology to detect PCa or to differentiate between significant and non-significant PCa. AI also applies to PCa treatment, whether surgical intervention or radiotherapy, skills training, or assessment, to improve treatment modalities and outcome prediction. AI in PCa management has the potential to provide a useful role by predicting PCa more accurately, using a multiomic approach and risk-stratifying patients to provide personalized medicine.
26.
Song KD. Current status of deep learning applications in abdominal ultrasonography. Ultrasonography 2020; 40:177-182. [PMID: 33242931] [PMCID: PMC7994733] [DOI: 10.14366/usg.20085]
Abstract
Deep learning is one of the most popular artificial intelligence techniques used in the medical field. Although it is at an early stage compared to deep learning analyses of computed tomography or magnetic resonance imaging, studies applying deep learning to ultrasound imaging have been actively conducted. This review analyzes recent studies that applied deep learning to ultrasound imaging of various abdominal organs and explains the challenges encountered in these applications.
Collapse
Affiliation(s)
- Kyoung Doo Song
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
|
27
|
Moreira FF, Oliveira HR, Volenec JJ, Rainey KM, Brito LF. Integrating High-Throughput Phenotyping and Statistical Genomic Methods to Genetically Improve Longitudinal Traits in Crops. FRONTIERS IN PLANT SCIENCE 2020; 11:681. [PMID: 32528513 PMCID: PMC7264266 DOI: 10.3389/fpls.2020.00681] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Accepted: 04/30/2020] [Indexed: 05/28/2023]
Abstract
The rapid development of remote sensing in agronomic research allows the dynamic nature of longitudinal traits to be adequately described, which may enhance the genetic improvement of crop efficiency. For traits such as light interception, biomass accumulation, and responses to stressors, the data generated by the various high-throughput phenotyping (HTP) methods requires adequate statistical techniques to evaluate phenotypic records throughout time. As a consequence, information about plant functioning and activation of genes, as well as the interaction of gene networks at different stages of plant development and in response to environmental stimulus can be exploited. In this review, we outline the current analytical approaches in quantitative genetics that are applied to longitudinal traits in crops throughout development, describe the advantages and pitfalls of each approach, and indicate future research directions and opportunities.
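One of the modelling families this review covers for longitudinal traits is random regression on orthogonal (Legendre) polynomials. The sketch below illustrates only the curve-fitting step, using plain ordinary least squares for a single genotype's records rather than a full mixed model; the quadratic basis and all function names are illustrative assumptions, not the paper's implementation:

```python
def legendre_basis(t, t_min, t_max):
    """First three Legendre polynomials on time standardised to [-1, 1]."""
    x = -1.0 + 2.0 * (t - t_min) / (t_max - t_min)
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

def fit_growth_curve(times, phenotypes):
    """Ordinary least-squares fit of a quadratic Legendre curve to one
    genotype's longitudinal records (normal equations, 3 coefficients)."""
    t_min, t_max = min(times), max(times)
    X = [legendre_basis(t, t_min, t_max) for t in times]
    n = 3
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * y for row, y in zip(X, phenotypes)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the 3x3 normal equations.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef, t_min, t_max

def predict(coef, t, t_min, t_max):
    """Predicted phenotype at time t from the fitted curve."""
    return sum(c * p for c, p in zip(coef, legendre_basis(t, t_min, t_max)))
```

In an actual random regression model these coefficients would be decomposed into fixed and random (genetic) components across genotypes; here the point is only that a handful of basis coefficients summarise a whole trajectory.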
Affiliation(s)
- Fabiana F. Moreira
- Department of Agronomy, Purdue University, West Lafayette, IN, United States
- Hinayah R. Oliveira
- Department of Animal Sciences, Purdue University, West Lafayette, IN, United States
- Jeffrey J. Volenec
- Department of Agronomy, Purdue University, West Lafayette, IN, United States
- Katy M. Rainey
- Department of Agronomy, Purdue University, West Lafayette, IN, United States
- Luiz F. Brito
- Department of Animal Sciences, Purdue University, West Lafayette, IN, United States
|
28
|
Javadi G, Samadi S, Bayat S, Pesteie M, Jafari MH, Sojoudi S, Kesch C, Hurtado A, Chang S, Mousavi P, Black P, Abolmaesumi P. Multiple instance learning combined with label invariant synthetic data for guiding systematic prostate biopsy: a feasibility study. Int J Comput Assist Radiol Surg 2020; 15:1023-1031. [PMID: 32356095 DOI: 10.1007/s11548-020-02168-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 04/10/2020] [Indexed: 10/24/2022]
Abstract
PURPOSE Ultrasound imaging is routinely used in prostate biopsy, which involves obtaining prostate tissue samples using a systematic, yet non-targeted, approach. This approach is blind to individual patient intraprostatic pathology and, unfortunately, has a high rate of false negatives. METHODS In this paper, we propose a deep network for improved detection of prostate cancer in systematic biopsy. We address several challenges associated with training such a network: (1) Statistical labels: Since a biopsy core's pathology report only represents a statistical distribution of cancer within the core, we use multiple instance learning (MIL) networks to enable learning from ultrasound image regions associated with those data; (2) Limited labels: The number of biopsy cores is limited to at most 12 per patient. As a result, the number of samples available for training a deep network is limited. We alleviate this issue by effectively combining Independent Conditional Variational Auto Encoders (ICVAE) with MIL. We train the ICVAE to learn label-invariant features of RF data, which are subsequently used to generate synthetic data for improved training of the MIL network. RESULTS Our in vivo study includes data from 339 prostate biopsy cores of 70 patients. We achieve an area under the curve, sensitivity, specificity, and balanced accuracy of 0.68, 0.77, 0.55, and 0.66, respectively. CONCLUSION The proposed approach is generic and can be applied to several other scenarios where unlabeled data and noisy labels are present in training samples.
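The bag-level constraint that motivates MIL here, a single core-level (bag) pathology label covering many image regions (instances), can be sketched with simple max pooling over per-region probabilities. This stand-in omits the trained network entirely; the instance probabilities and function names are hypothetical:

```python
def bag_probability(instance_probs):
    """Max-pooled MIL: a bag (biopsy core) is as suspicious as its most
    suspicious instance (image region)."""
    return max(instance_probs)

def mil_predict(bags, threshold=0.5):
    """Label each bag positive if its pooled probability clears the threshold."""
    return [1 if bag_probability(probs) >= threshold else 0 for probs in bags]
```

For example, a core whose regions score [0.1, 0.2, 0.9] is labelled positive even though most of its regions look benign, which matches how a heterogeneous core receives a single cancer label from pathology.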
Affiliation(s)
- Golara Javadi
- The University of British Columbia, Vancouver, BC, Canada.
- Samareh Samadi
- The University of British Columbia, Vancouver, BC, Canada
- Sharareh Bayat
- The University of British Columbia, Vancouver, BC, Canada
- Mehran Pesteie
- The University of British Columbia, Vancouver, BC, Canada
- Samira Sojoudi
- The University of British Columbia, Vancouver, BC, Canada
- Silvia Chang
- Vancouver General Hospital, Vancouver, BC, Canada
- Peter Black
- Vancouver General Hospital, Vancouver, BC, Canada
|
29
|
Aminikhanghahi S, Schmitter-Edgecombe M, Cook DJ. Context-Aware Delivery of Ecological Momentary Assessment. IEEE J Biomed Health Inform 2020; 24:1206-1214. [PMID: 31443058 PMCID: PMC8059357 DOI: 10.1109/jbhi.2019.2937116] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Ecological Momentary Assessment (EMA) is an in-the-moment data collection method which avoids retrospective biases and maximizes ecological validity. A challenge in designing EMA systems is finding a time to ask EMA questions that increases participant engagement and improves the quality of data collection. In this work, we introduce SEP-EMA, a machine learning-based method for providing transition-based context-aware EMA prompt timings. We compare our proposed technique with traditional time-based prompting for 19 individuals living in smart homes. Results reveal that SEP-EMA increased participant response rate by 7.19% compared to time-based prompting. Our findings suggest that prompting during activity transitions makes the EMA process more usable and effective by increasing EMA response rates and mitigating loss of data due to low response rates.
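At its simplest, the transition-based prompting SEP-EMA relies on reduces to firing a prompt whenever the recognised activity changes. The detector below is a deliberately naive stand-in for the paper's learned segmentation of sensor data, assuming activity labels are already available:

```python
def transition_prompts(activity_stream):
    """Return the indices at which an EMA prompt would fire: each point
    where the recognised activity differs from the previous sample."""
    return [i for i in range(1, len(activity_stream))
            if activity_stream[i] != activity_stream[i - 1]]
```

A stream like ["sleep", "sleep", "cook", "cook", "eat"] would prompt at the sleep-to-cook and cook-to-eat boundaries, rather than at fixed clock times.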
|
30
|
Zhu Q, Du B, Yan P. Boundary-Weighted Domain Adaptive Neural Network for Prostate MR Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:753-763. [PMID: 31425022 PMCID: PMC7015773 DOI: 10.1109/tmi.2019.2935018] [Citation(s) in RCA: 74] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images faces several challenges. The lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and the large variation in size, shape, and intensity distribution of the prostate itself complicate segmentation even further. Recently, with deep learning, especially convolutional neural networks (CNNs), emerging as the best-performing method for medical image segmentation, the difficulty of obtaining large numbers of annotated medical images for training CNNs has become more pronounced than ever. Since a large-scale dataset is one of the critical components of the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle the above challenges, in this paper we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to boundaries during segmentation, a boundary-weighted segmentation loss is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate the proposed model on three different MR prostate datasets. The experimental results demonstrate that the proposed model is more sensitive to object boundaries and outperforms other state-of-the-art methods.
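The boundary-weighting idea can be illustrated in one dimension: pixels near a label change receive larger weights in the cross-entropy sum, so boundary errors cost more. The inverse-distance weighting below is an assumption chosen for illustration, not BOWDA-Net's exact loss formulation:

```python
import math

def boundary_weights(mask, alpha=2.0):
    """Weight each pixel by its proximity to the nearest label change
    in a 1D binary mask (weight decays with distance to the boundary)."""
    n = len(mask)
    edges = [i for i in range(1, n) if mask[i] != mask[i - 1]]
    if not edges:
        return [1.0] * n
    return [1.0 + alpha / (1.0 + min(abs(i - e) for e in edges))
            for i in range(n)]

def weighted_bce(pred, mask, alpha=2.0):
    """Boundary-weighted binary cross-entropy over a 1D profile."""
    w = boundary_weights(mask, alpha)
    eps = 1e-7
    total = 0.0
    for p, y, wi in zip(pred, mask, w):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -wi * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / sum(w)
```

Predictions that hug the boundary pixels correctly are rewarded more than an equally confident fit of the interior, which is the loss-level mechanism for making the network "more sensitive to object boundaries".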
Affiliation(s)
- Qikui Zhu
- School of Computer Science, Wuhan University, Wuhan, China
- Bo Du
- co-corresponding authors: B. Du (), P. Yan ()
- Pingkun Yan
- co-corresponding authors: B. Du (), P. Yan ()
|
31
|
Celik N, O'Brien F, Brennan S, Rainbow RD, Dart C, Zheng Y, Coenen F, Barrett-Jolley R. Deep-Channel uses deep neural networks to detect single-molecule events from patch-clamp data. Commun Biol 2020; 3:3. [PMID: 31925311 PMCID: PMC6946689 DOI: 10.1038/s42003-019-0729-3] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Accepted: 12/06/2019] [Indexed: 12/05/2022] Open
Abstract
Single-molecule research techniques such as patch-clamp electrophysiology deliver unique biological insight by capturing the movement of individual proteins in real time, unobscured by whole-cell ensemble averaging. The critical first step in analysis is event detection, so-called "idealisation", where noisy raw data are turned into discrete records of protein movement. To date there have been practical limitations in patch-clamp data idealisation; high-quality idealisation is typically laborious and becomes infeasible and subjective with complex biological data containing many distinct native single-ion-channel proteins gating simultaneously. Here, we show that a deep learning model based on convolutional neural network and long short-term memory architectures can automatically idealise complex single-molecule activity more accurately and faster than traditional methods. There are no parameters to set: baseline, channel amplitude, or number of channels, for example. We believe this approach could revolutionise the unsupervised automatic detection of single-molecule transition events in the future.
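The idealisation task itself, turning a noisy current trace into a discrete count of open channels, can be illustrated with the kind of naive amplitude-threshold baseline that Deep-Channel is designed to replace. Unlike the network, this sketch assumes the baseline current and unitary channel amplitude are known in advance (the very parameters the abstract says the model does not need):

```python
def threshold_idealise(trace, baseline, unit_amplitude, max_channels=5):
    """Naive idealisation: round each current sample to the nearest
    whole number of simultaneously open channels."""
    idealised = []
    for sample in trace:
        n_open = round((sample - baseline) / unit_amplitude)
        idealised.append(max(0, min(max_channels, n_open)))
    return idealised
```

On clean data this works; on real recordings with drifting baselines and overlapping gating of distinct channel types it breaks down, which is exactly the regime where a learned sequence model has the advantage.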
Affiliation(s)
- Numan Celik
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Fiona O'Brien
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Sean Brennan
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Richard D Rainbow
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Caroline Dart
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Yalin Zheng
- Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Frans Coenen
- Department of Computer Science, University of Liverpool, Liverpool, UK
|
32
|
Deep neural maps for unsupervised visualization of high-grade cancer in prostate biopsies. Int J Comput Assist Radiol Surg 2019; 14:1009-1016. [PMID: 30905010 DOI: 10.1007/s11548-019-01950-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Accepted: 03/15/2019] [Indexed: 12/29/2022]
Abstract
Prostate cancer (PCa) is the most frequent noncutaneous cancer in men. Early detection of PCa is essential for clinical decision making, and reducing metastasis and mortality rates. The current approach for PCa diagnosis is histopathologic analysis of core biopsies taken under transrectal ultrasound guidance (TRUS-guided). Both TRUS-guided systematic biopsy and MR-TRUS-guided fusion biopsy have limitations in accurately identifying PCa, intraoperatively. There is a need to augment this process by visualizing highly probable areas of PCa. Temporal enhanced ultrasound (TeUS) has emerged as a promising modality for PCa detection. Prior work focused on supervised classification of PCa verified by gold standard pathology. Pathology labels are noisy, and data from an entire core have a single label even when significantly heterogeneous. Additionally, supervised methods are limited by data from cores with known pathology, and a significant portion of prostate data is discarded without being used. We provide an end-to-end unsupervised solution to map PCa distribution from TeUS data using an innovative representation learning method, deep neural maps. TeUS data are transformed to a topologically arranged hyper-lattice, where similar samples are closer together in the lattice. Therefore, similar regions of malignant and benign tissue in the prostate are clustered together. Our proposed method increases the number of training samples by several orders of magnitude. Data from biopsy cores with known labels are used to associate the clusters with PCa. Cancer probability maps generated using the unsupervised clustering of TeUS data help intuitively visualize the distribution of abnormal tissue for augmenting TRUS-guided biopsies.
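The topologically arranged hyper-lattice described above is, in its clustering stage, essentially self-organizing-map training over learned features. A minimal 1D-lattice sketch in pure Python follows; the autoencoder feature-extraction stage is omitted, and the lattice size, learning schedule, and names are illustrative assumptions:

```python
import random

def train_som(samples, n_units=4, epochs=50, lr=0.5, seed=0):
    """Fit a 1D self-organizing map: each unit holds a prototype vector,
    and units near the winner on the lattice are pulled toward each sample."""
    rng = random.Random(seed)
    dim = len(samples[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)  # decaying learning rate
        for x in samples:
            # best-matching unit = prototype closest to the sample
            bmu = min(range(n_units),
                      key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], x)))
            for u in range(n_units):
                # neighbourhood strength falls off with lattice distance
                h = 1.0 / (1.0 + abs(u - bmu))
                for d in range(dim):
                    units[u][d] += rate * h * (x[d] - units[u][d])
    return units

def map_sample(units, x):
    """Lattice position of the best-matching unit for a new sample."""
    return min(range(len(units)),
               key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], x)))
```

Because neighbouring lattice units are updated together, similar samples land at nearby positions after training, which is the property that lets cluster positions be coloured into a cancer probability map once a few labelled cores anchor them.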
|