1
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. [PMID: 36376203 PMCID: PMC11156786 DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data, a second focus of this work is the analysis of the availability of open source, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement in reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
2
Ataei A, Eggermont F, Verdonschot N, Lessmann N, Tanck E. The effect of deep learning-based lesion segmentation on failure load calculations of metastatic femurs using finite element analysis. Bone 2024; 179:116987. [PMID: 38061504 DOI: 10.1016/j.bone.2023.116987]
Abstract
Bone ranks as the third most frequent tissue affected by cancer metastases, following the lung and liver. Bone metastases are often painful and may result in pathological fracture, a major cause of morbidity and mortality in cancer patients. To quantify fracture risk, finite element (FE) analysis has been shown to be a promising tool, but metastatic lesions are typically not specifically segmented, so their mechanical properties may not be represented adequately. Deep learning methods potentially provide the opportunity to segment these lesions automatically and to assign their mechanical properties more adequately. In this study, our primary focus was to gain insight into the performance of a deep learning-based automatic segmentation algorithm for femoral metastatic lesions and the subsequent effects on FE outcomes. The aims were to determine the similarity between manual and automatic segmentation; the differences in predicted failure load between FE models with automatically segmented osteolytic and mixed lesions and models with CT-based lesion values (the gold standard); and the effect on the BOne Strength (BOS) score (failure load adjusted for body weight) and subsequent fracture risk assessments. From two patient cohorts, a total of 50 femurs with osteolytic and mixed metastatic lesions were included in this study. The femurs were segmented from CT images and transferred into FE meshes. The material behavior was implemented as non-linear isotropic. These FE models were considered the gold standard (Finite Element no Segmented Lesion: FE-no-SL), whereby the local calcium equivalent density of both femur and metastatic lesion was extracted from CT values. Lesions in the femur were manually segmented by two biomechanical experts, after which the final lesion segmentation for each femur was obtained by consensus between the two observers.
Subsequently, nnU-Net, a self-configuring variant of the popular deep learning model U-Net, was used to automatically segment metastatic lesions within the femur. For these models with segmented lesions (Finite Element with Segmented Lesion: FE-with-SL), the calcium equivalent density within the segmented metastatic lesions was set to zero, simulating the absence of load-bearing capacity of these lesions. The models (with or without automatically segmented lesions) were loaded incrementally in the axial direction until failure was simulated. The Dice coefficient was used to evaluate the similarity of the manual and automatic segmentations. Mean calcium equivalent density values within the automatically segmented lesions were calculated. Failure loads and patterns were determined. Furthermore, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for both groups by comparing the predictions to the occurrence or absence of actual fracture within the patient cohorts. The automatic segmentation algorithm performed in a non-robust manner. Dice coefficients describing the similarity between the consented manual and automatic segmentations were relatively low (mean 0.45 ± standard deviation 0.33, median 0.54). The failure load difference between the FE-no-SL and FE-with-SL groups varied from 0% to 48% (mean 6.6%). Correlation analysis of failure loads between the two groups showed a strong relationship (R2 > 0.9). Of the 50 cases, four showed clear deviations, with models with automatic lesion segmentation (FE-with-SL) yielding considerably lower failure loads. In the whole database, including osteolytic and mixed lesions, sensitivity and NPV remained the same, but specificity and PPV decreased from 94% to 83% and from 78% to 54%, respectively, from FE-no-SL to FE-with-SL.
This study indicates that nnU-Net yielded non-robust outcomes in femoral lesion segmentation and that other segmentation algorithms should be considered. However, the differences in failure pattern and failure load between FE models with and without automatically segmented osteolytic and mixed lesions were relatively small in most cases, with a few exceptions. On the other hand, the accuracy of fracture risk assessment using the BOS score was lower than with FE-no-SL. In conclusion, this study showed that automatic lesion segmentation remains an unsolved problem; quantifying lesion characteristics and the subsequent effect on fracture risk using deep learning will therefore remain challenging.
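The sensitivity, specificity, PPV and NPV reported above follow directly from a 2 × 2 confusion matrix of predicted versus observed fractures. A minimal sketch (illustrative only; `predicted` and `actual` are hypothetical arrays, not the study's data):

```python
import numpy as np

def fracture_metrics(predicted, actual):
    """Sensitivity, specificity, PPV and NPV for binary fracture predictions.

    predicted / actual: boolean arrays, True meaning a fracture was
    predicted (e.g., from the BOS score) / actually occurred.
    """
    predicted = np.asarray(predicted, dtype=bool)
    actual = np.asarray(actual, dtype=bool)
    tp = int(np.sum(predicted & actual))    # correctly flagged fractures
    tn = int(np.sum(~predicted & ~actual))  # correctly cleared femurs
    fp = int(np.sum(predicted & ~actual))   # false alarms
    fn = int(np.sum(~predicted & actual))   # missed fractures
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

In these terms, the reported drop in specificity (94% to 83%) and PPV (78% to 54%) from FE-no-SL to FE-with-SL corresponds to additional false-positive fracture predictions.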
Affiliation(s)
- Ali Ataei
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands.
- Florieke Eggermont
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
- Nico Verdonschot
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands; Laboratory for Biomechanical Engineering, University of Twente, Enschede, the Netherlands
- Nikolas Lessmann
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud university medical center, Nijmegen, the Netherlands
- Esther Tanck
- Orthopaedic Research Lab, Radboud university medical center, P.O. Box 9101, 6500 HB Nijmegen, the Netherlands
3
Ghayda RA, Cannarella R, Calogero AE, Shah R, Rambhatla A, Zohdy W, Kavoussi P, Avidor-Reiss T, Boitrelle F, Mostafa T, Saleh R, Toprak T, Birowo P, Salvio G, Calik G, Kuroda S, Kaiyal RS, Ziouziou I, Crafa A, Phuoc NHV, Russo GI, Durairajanayagam D, Al-Hashimi M, Hamoda TAAAM, Pinggera GM, Adriansjah R, Maldonado Rosas I, Arafa M, Chung E, Atmoko W, Rocco L, Lin H, Huyghe E, Kothari P, Solorzano Vazquez JF, Dimitriadis F, Garrido N, Homa S, Falcone M, Sabbaghian M, Kandil H, Ko E, Martinez M, Nguyen Q, Harraz AM, Serefoglu EC, Karthikeyan VS, Tien DMB, Jindal S, Micic S, Bellavia M, Alali H, Gherabi N, Lewis S, Park HJ, Simopoulou M, Sallam H, Ramirez L, Colpi G, Agarwal A. Artificial Intelligence in Andrology: From Semen Analysis to Image Diagnostics. World J Mens Health 2024; 42:39-61. [PMID: 37382282 PMCID: PMC10782130 DOI: 10.5534/wjmh.230050]
Abstract
Artificial intelligence (AI) in medicine has gained considerable momentum over recent decades and has been applied to various fields of medicine. Advances in computer science, medical informatics, and robotics, together with the need for personalized medicine, have facilitated the role of AI in modern healthcare. As in other fields, AI applications such as machine learning, artificial neural networks, and deep learning have shown great potential in andrology and reproductive medicine. AI-based tools are poised to become valuable assets that support diagnosing and treating male infertility and improve the accuracy of patient care. These automated, AI-based predictions may offer consistency and efficiency in terms of time and cost in infertility research and clinical management. In andrology and reproductive medicine, AI has been used for objective sperm, oocyte, and embryo selection, prediction of surgical outcomes, cost-effective assessment, development of robotic surgery, and clinical decision-making systems. In the future, better integration and implementation of AI into medicine will undoubtedly lead to pioneering evidence-based breakthroughs and the reshaping of andrology and reproductive medicine.
Affiliation(s)
- Ramy Abou Ghayda
- Urology Institute, University Hospitals, Case Western Reserve University, Cleveland, OH, USA
- Rossella Cannarella
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Aldo E. Calogero
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Rupin Shah
- Department of Urology, Lilavati Hospital and Research Centre, Mumbai, India
- Amarnath Rambhatla
- Department of Urology, Henry Ford Health System, Vattikuti Urology Institute, Detroit, MI, USA
- Wael Zohdy
- Andrology and STDs, Cairo University, Cairo, Egypt
- Parviz Kavoussi
- Department of Urology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Tomer Avidor-Reiss
- Department of Biological Sciences, University of Toledo, Toledo, OH, USA
- Department of Urology, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Florence Boitrelle
- Reproductive Biology, Fertility Preservation, Andrology, CECOS, Poissy Hospital, Poissy, France
- Department of Biology, Reproduction, Epigenetics, Environment, and Development, Paris Saclay University, UVSQ, INRAE, BREED, Paris, France
- Taymour Mostafa
- Andrology, Sexology & STIs Department, Faculty of Medicine, Cairo University, Cairo, Egypt
- Ramadan Saleh
- Department of Dermatology, Venereology and Andrology, Faculty of Medicine, Sohag University, Sohag, Egypt
- Tuncay Toprak
- Department of Urology, Fatih Sultan Mehmet Training and Research Hospital, University of Health Sciences, Istanbul, Turkey
- Ponco Birowo
- Department of Urology, Dr. Cipto Mangunkusumo Hospital, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Gianmaria Salvio
- Department of Endocrinology, Polytechnic University of Marche, Ancona, Italy
- Gokhan Calik
- Department of Urology, Istanbul Medipol University, Istanbul, Turkey
- Shinnosuke Kuroda
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Department of Urology, Reproduction Center, Yokohama City University Medical Center, Yokohama, Japan
- Raneen Sawaid Kaiyal
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Imad Ziouziou
- Department of Urology, College of Medicine and Pharmacy, Ibn Zohr University, Agadir, Morocco
- Andrea Crafa
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Nguyen Ho Vinh Phuoc
- Department of Andrology, Binh Dan Hospital, Ho Chi Minh City, Vietnam
- Department of Urology and Andrology, Pham Ngoc Thach University of Medicine, Ho Chi Minh City, Vietnam
- Damayanthi Durairajanayagam
- Department of Physiology, Faculty of Medicine, Universiti Teknologi MARA, Sungai Buloh Campus, Selangor, Malaysia
- Manaf Al-Hashimi
- Department of Urology, Burjeel Hospital, Abu Dhabi, United Arab Emirates (UAE)
- Khalifa University, College of Medicine and Health Science, Abu Dhabi, United Arab Emirates (UAE)
- Taha Abo-Almagd Abdel-Meguid Hamoda
- Department of Urology, King Abdulaziz University, Jeddah, Saudi Arabia
- Department of Urology, Faculty of Medicine, Minia University, El-Minia, Egypt
- Ricky Adriansjah
- Department of Urology, Hasan Sadikin General Hospital, Universitas Padjadjaran, Bandung, Indonesia
- Mohamed Arafa
- Department of Urology, Hamad Medical Corporation, Doha, Qatar
- Department of Urology, Weill Cornell Medical-Qatar, Doha, Qatar
- Eric Chung
- Department of Urology, Princess Alexandra Hospital, University of Queensland, Brisbane QLD, Australia
- Widi Atmoko
- Department of Urology, Dr. Cipto Mangunkusumo Hospital, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Lucia Rocco
- Department of Environmental, Biological and Pharmaceutical Sciences and Technologies, University of Campania “Luigi Vanvitelli”, Caserta, Italy
- Haocheng Lin
- Department of Urology, Peking University Third Hospital, Peking University, Beijing, China
- Eric Huyghe
- Department of Urology and Andrology, University Hospital of Toulouse, Toulouse, France
- Priyank Kothari
- Department of Urology, B.Y.L. Nair Charitable Hospital, Topiwala National Medical College, Mumbai, India
- Fotios Dimitriadis
- Department of Urology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nicolas Garrido
- IVIRMA Global Research Alliance, IVI Foundation, Instituto de Investigación Sanitaria La Fe (IIS La Fe), Valencia, Spain
- Sheryl Homa
- Department of Biosciences, University of Kent, Canterbury, United Kingdom
- Marco Falcone
- Department of Urology, Molinette Hospital, A.O.U. Città della Salute e della Scienza, University of Turin, Torino, Italy
- Marjan Sabbaghian
- Department of Andrology, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
- Edmund Ko
- Department of Urology, Loma Linda University Health, Loma Linda, CA, USA
- Marlon Martinez
- Section of Urology, Department of Surgery, University of Santo Tomas Hospital, Manila, Philippines
- Quang Nguyen
- Center for Andrology and Sexual Medicine, Viet Duc University Hospital, Hanoi, Vietnam
- Department of Urology, Andrology and Sexual Medicine, University of Medicine and Pharmacy, Vietnam National University, Hanoi, Vietnam
- Ahmed M. Harraz
- Urology and Nephrology Center, Mansoura University, Mansoura, Egypt
- Department of Surgery, Urology Unit, Farwaniya Hospital, Farwaniya, Kuwait
- Department of Urology, Sabah Al Ahmad Urology Center, Kuwait City, Kuwait
- Ege Can Serefoglu
- Department of Urology, Biruni University School of Medicine, Istanbul, Turkey
- Dung Mai Ba Tien
- Department of Andrology, Binh Dan Hospital, Ho Chi Minh City, Vietnam
- Sunil Jindal
- Department of Andrology and Reproductive Medicine, Jindal Hospital, Meerut, India
- Sava Micic
- Department of Andrology, Uromedica Polyclinic, Belgrade, Serbia
- Marina Bellavia
- Andrology and IVF Center, Next Fertility Procrea, Lugano, Switzerland
- Hamed Alali
- King Fahad Specialist Hospital, Dammam, Saudi Arabia
- Nazim Gherabi
- Andrology Committee of the Algerian Association of Urology, Algiers, Algeria
- Sheena Lewis
- Examen Lab Ltd., Northern Ireland, United Kingdom
- Hyun Jun Park
- Department of Urology, Pusan National University School of Medicine, Busan, Korea
- Medical Research Institute of Pusan National University Hospital, Busan, Korea
- Mara Simopoulou
- Department of Experimental Physiology, School of Health Sciences, Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Hassan Sallam
- Alexandria University Faculty of Medicine, Alexandria, Egypt
- Liliana Ramirez
- IVF Laboratory, CITMER Reproductive Medicine, Mexico City, Mexico
- Giovanni Colpi
- Andrology and IVF Center, Next Fertility Procrea, Lugano, Switzerland
- Ashok Agarwal
- Global Andrology Forum, Moreland Hills, OH, USA
- Cleveland Clinic, Cleveland, OH, USA
4
Peng T, Dong Y, Di G, Zhao J, Li T, Ren G, Zhang L, Cai J. Boundary delineation in transrectal ultrasound images for region of interest of prostate. Phys Med Biol 2023; 68:195008. [PMID: 37652058 DOI: 10.1088/1361-6560/acf5c5]
Abstract
Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for ultrasound-guided brachytherapy for prostate cancer. However, the current practice of manual segmentation is difficult, time-consuming, and prone to errors. To overcome these challenges, we developed an accurate prostate segmentation framework (A-ProSeg) for TRUS images. The proposed segmentation method comprises three innovative steps: (1) acquiring the sequence of vertices by using an improved polygonal segment-based method with a small number of radiologist-defined seed points as prior points; (2) establishing an optimal machine learning-based method by using an improved evolutionary neural network; and (3) obtaining smooth contours of the prostate region of interest using the optimized machine learning-based method. The proposed method was evaluated on 266 patients who underwent prostate cancer brachytherapy. It achieved high performance against the ground truth, with a Dice similarity coefficient of 96.2% ± 2.4%, a Jaccard similarity coefficient of 94.4% ± 3.3%, and an accuracy of 95.7% ± 2.7%, all higher than the values obtained using state-of-the-art methods. A sensitivity evaluation at different noise levels demonstrated that our method is highly robust to changes in image quality. Meanwhile, an ablation study was performed, and the significance of all key components of the proposed method was demonstrated.
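The Dice and Jaccard coefficients used for evaluation are simple overlap ratios between predicted and ground-truth masks. A sketch under the assumption of binary NumPy masks (hypothetical names, not the A-ProSeg code):

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Overlap between two binary segmentation masks.

    Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.sum(a & b)          # shared foreground voxels
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.sum(a | b)
    return float(dice), float(jaccard)
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why Dice values reported by such papers always exceed the corresponding Jaccard values.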
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, People's Republic of China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Yan Dong
- Department of Ultrasonography, The First Affiliated Hospital of Soochow University, Suzhou, People's Republic of China
- Gongye Di
- Department of Ultrasonic, Taizhou People's Hospital Affiliated to Nanjing Medical University, Taizhou, People's Republic of China
- Jing Zhao
- Department of Ultrasound, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, People's Republic of China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Lei Zhang
- Medical Physics Graduate Program and Data Science Research Center, Duke Kunshan University, Kunshan, Jiangsu, People's Republic of China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
5
Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:909-919. [PMID: 37028313 DOI: 10.1109/tuffc.2023.3255843]
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, owing to its imaging principles, US is often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep learning algorithms in the applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.
6
Thijssen LCP, de Rooij M, Barentsz JO, Huisman HJ. Radiomics based automated quality assessment for T2W prostate MR images. Eur J Radiol 2023; 165:110928. [PMID: 37354769 DOI: 10.1016/j.ejrad.2023.110928]
Abstract
PURPOSE The guidelines for prostate cancer recommend the use of MRI in the prostate cancer pathway. Because prostate MR image quality varies, the reliability of this technique in the detection of prostate cancer is highly variable in clinical practice. This leads to the need for an objective and automated assessment of image quality to ensure adequate acquisition and thereby improve the reliability of MRI. The aim of this study is to investigate the feasibility of the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and radiomics for automated image quality assessment of T2-weighted (T2W) images. METHOD Anonymized axial T2W images from 140 patients were scored for quality on a five-point Likert scale (low, suboptimal, acceptable, good, very good quality) in consensus by two readers. Images were dichotomized into clinically acceptable (very good, good, and acceptable quality) and clinically unacceptable (low and suboptimal quality) in order to train and verify the model. Radiomics and BRISQUE features were extracted from a central cuboid volume including the prostate. A reduced feature set was used to fit a linear discriminant analysis (LDA) model to predict image quality. Five-fold cross-validation, repeated 200 times, was used to train the model and test performance by assessing the classification accuracy, the discrimination accuracy as the area under the receiver operating characteristic curve (ROC-AUC), and by generating confusion matrices. RESULTS Thirty-four images were classified as clinically unacceptable and 106 as clinically acceptable. The accuracy on the independent test set (mean ± standard deviation) was 85.4 ± 5.5%. The ROC-AUC was 0.856 (mean; 95% confidence interval 0.851-0.861). CONCLUSIONS Radiomics AI can automatically detect a significant portion of T2W images of suboptimal quality. This can help improve image quality at the time of acquisition, thus reducing repeat scans and improving diagnostic accuracy.
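The evaluation protocol described here (an LDA classifier scored by ROC-AUC under repeated stratified 5-fold cross-validation) maps directly onto standard scikit-learn machinery. A sketch with synthetic Gaussian features standing in for the reduced BRISQUE/radiomics feature set; the class sizes mirror the study (106 acceptable vs. 34 unacceptable), but the data, feature count, and separation are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for the reduced feature matrix: 140 "patients",
# 8 features each; class 1 is shifted to make the problem learnable.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (106, 8)),   # clinically acceptable
               rng.normal(1.5, 1.0, (34, 8))])   # clinically unacceptable
y = np.array([0] * 106 + [1] * 34)

# Stratified 5-fold CV repeated 200 times, scored by ROC-AUC as in the paper.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=200, random_state=0)
aucs = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv,
                       scoring="roc_auc")
print(f"mean ROC-AUC over {len(aucs)} folds: {aucs.mean():.3f}")
```

Stratification keeps the 34/106 class ratio roughly constant in every fold, which matters with such imbalanced classes; the repetition yields a distribution of fold scores from which a confidence interval like the paper's can be derived.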
7
Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023; 22:429-445. [PMID: 37248158 DOI: 10.1016/j.brachy.2023.04.003]
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models, such as support vector machines and decision tree-based learning, to newer techniques, such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or to automate an entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan ZL Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada.
8
Peng T, Wu Y, Zhao J, Wang C, Wang J, Cai J. Ultrasound Prostate Segmentation Using Adaptive Selection Principal Curve and Smooth Mathematical Model. J Digit Imaging 2023; 36:947-963. [PMID: 36729258 PMCID: PMC10287615 DOI: 10.1007/s10278-023-00783-3]
Abstract
Accurate prostate segmentation in ultrasound images is crucial for the clinical diagnosis of prostate cancer and for performing image-guided prostate surgery. However, it is challenging to accurately segment the prostate in ultrasound images due to their low signal-to-noise ratio, the low contrast between the prostate and neighboring tissues, and the diffuse or invisible boundaries of the prostate. In this paper, we develop a novel hybrid method for segmentation of the prostate in ultrasound images that generates accurate contours of the prostate from a range of datasets. Our method involves three key steps: (1) application of a principal curve-based method to obtain a data sequence comprising data coordinates and their corresponding projection index; (2) use of the projection index as training input for a fractional-order-based neural network that increases the accuracy of results; and (3) generation of a smooth mathematical map (expressed via the parameters of the neural network) that yields a smooth prostate boundary, which represents the output of the neural network (i.e., optimized vertices) and matches the ground-truth contour. Experimental evaluation of our method and several state-of-the-art segmentation methods on datasets of prostate ultrasound images from multiple institutions demonstrated that our method performed best. Furthermore, our method proved robust across various evaluation metrics when applied to prostate ultrasound images obtained at multiple institutions.
Collapse
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, China.
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA.
- Yiyun Wu
- Department of Ultrasound, Jiangsu Province Hospital of Chinese Medicine, Nanjing, Jiangsu, China
- Jing Zhao
- Department of Ultrasound, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, China
- Caishan Wang
- Department of Ultrasound, the Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Jin Wang
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
|
9
|
Zhou R, Guo F, Azarpazhooh MR, Spence JD, Gan H, Ding M, Fenster A. Carotid Vessel-Wall-Volume Ultrasound Measurement via a UNet++ Ensemble Algorithm Trained on Small Data Sets. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:1031-1036. [PMID: 36642588 DOI: 10.1016/j.ultrasmedbio.2022.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/02/2022] [Accepted: 12/10/2022] [Indexed: 06/17/2023]
Abstract
Vessel wall volume (VWV) is a 3-D ultrasound measurement for the assessment of therapy in patients with carotid atherosclerosis. Deep learning can be used to segment the media-adventitia boundary (MAB) and lumen-intima boundary (LIB) and to quantify VWV automatically; however, it typically requires large training data sets with expert manual segmentation, which are difficult to obtain. In this study, a UNet++ ensemble approach was developed for automated VWV measurement, trained on five small data sets (n = 30 participants) and tested on 100 participants with clinically diagnosed coronary artery disease enrolled in a multicenter CAIN trial. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), Pearson correlation coefficient (r), Bland-Altman plots and coefficient of variation (CoV) were used to evaluate algorithm segmentation accuracy, agreement and reproducibility. The UNet++ ensemble yielded DSCs of 91.07%-91.56% and 87.53%-89.44% and ASSDs of 0.10-0.11 mm and 0.33-0.39 mm for the MAB and LIB, respectively; the algorithm VWV measurements were correlated (r = 0.763-0.795, p < 0.001) with manual segmentations, and the CoV for VWV was 8.89%. In addition, the UNet++ ensemble trained on 30 participants achieved a performance similar to that of U-Net and Voxel-FCN trained on 150 participants. These results suggest that our approach could provide accurate and reproducible carotid VWV measurements using relatively small training data sets, supporting deep learning applications for monitoring atherosclerosis progression in research and clinical trials.
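The Dice similarity coefficient (DSC) and coefficient of variation (CoV) reported above are standard, easily reproduced metrics. A minimal generic sketch (not code from the cited study; the toy masks and volume values are hypothetical):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as 0/1 sequences."""
    inter = sum(x and y for x, y in zip(a, b))
    denom = sum(a) + sum(b)
    return 2.0 * inter / denom if denom else 1.0

def cov_percent(measurements):
    """Coefficient of variation (%) across repeated volume measurements (sample SD)."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Toy 1-D "masks" standing in for algorithm vs. manual segmentations
manual = [0, 1, 1, 1, 0, 0]
auto = [0, 1, 1, 0, 0, 0]
dsc = dice(auto, manual)                 # 2*2 / (2+3) = 0.8
cv = cov_percent([98.0, 102.0, 100.0])   # 2.0 (%)
```

In practice the masks are 2-D or 3-D arrays, but the formula is unchanged after flattening.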
Affiliation(s)
- Ran Zhou
- School of Computer Science, Hubei University of Technology, Wuhan, Hubei, China
- Fumin Guo
- Wuhan National Laboratory for Optoelectronics, Biomedical Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- M Reza Azarpazhooh
- Stroke Prevention and Atherosclerosis Research Centre, Robarts Research Institute, Western University, London, Ontario, Canada
- J David Spence
- Stroke Prevention and Atherosclerosis Research Centre, Robarts Research Institute, Western University, London, Ontario, Canada; Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario, Canada
- Haitao Gan
- School of Computer Science, Hubei University of Technology, Wuhan, Hubei, China
- Mingyue Ding
- College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Aaron Fenster
- Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario, Canada
|
10
|
Orlando N, Edirisinghe C, Gyacskov I, Vickress J, Sachdeva R, Gomez JA, D'Souza D, Velker V, Mendez LC, Bauman G, Fenster A, Hoover DA. Validation of a surface-based deformable MRI-3D ultrasound image registration algorithm toward clinical implementation for interstitial prostate brachytherapy. Brachytherapy 2023; 22:199-209. [PMID: 36641305 DOI: 10.1016/j.brachy.2022.11.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 11/05/2022] [Accepted: 11/28/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE: The purpose of this study was to evaluate and clinically implement a deformable surface-based magnetic resonance imaging (MRI) to three-dimensional ultrasound (US) image registration algorithm for prostate brachytherapy (BT), with the aim of reducing operator dependence and facilitating dose escalation to an MRI-defined target.
METHODS AND MATERIALS: Our surface-based deformable image registration (DIR) algorithm first translates and scales to align the US- and MR-defined prostate surfaces, followed by deformation of the MR-defined prostate surface to match the US-defined prostate surface. Algorithm performance was assessed in a phantom using three deformation levels, followed by validation in three retrospective high-dose-rate BT clinical cases. For comparison, manual rigid registration and cognitive fusion by physician were also employed. Registration accuracy was assessed using the Dice similarity coefficient (DSC) and target registration error (TRE) for embedded spherical landmarks. The algorithm was then implemented intraoperatively in a prospective clinical case.
RESULTS: In the phantom, our DIR algorithm demonstrated a mean DSC and TRE of 0.74 ± 0.08 and 0.94 ± 0.49 mm, respectively, significantly improving on manual rigid registration (0.64 ± 0.16 and 1.88 ± 1.24 mm, respectively). Clinical results demonstrated reduced variability compared with the current standard of cognitive fusion by physicians.
CONCLUSIONS: We successfully validated a DIR algorithm allowing translation of MR-defined target and organ-at-risk contours into the intraoperative environment. Prospective clinical implementation demonstrated the intraoperative feasibility of our algorithm, facilitating targeted biopsies and dose escalation to the MR-defined lesion. This method has the potential to standardize the registration procedure between physicians, reducing operator dependence.
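The TRE values quoted in this abstract are mean distances between corresponding landmarks after registration. As a generic sketch of the metric only (not the study's implementation; the landmark coordinates are hypothetical):

```python
import math

def target_registration_error(moved, fixed):
    """Mean Euclidean distance (same units as input, e.g. mm) between
    corresponding landmark pairs after a registration has been applied."""
    dists = [math.dist(p, q) for p, q in zip(moved, fixed)]
    return sum(dists) / len(dists)

# Registered landmark positions vs. ground-truth positions (mm)
moved = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
fixed = [(0.0, 0.0, 1.0), (10.0, 0.0, 0.0)]
tre = target_registration_error(moved, fixed)  # (1.0 + 0.0) / 2 = 0.5 mm
```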
Affiliation(s)
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada.
- Igor Gyacskov
- Robarts Research Institute, Western University, London, Ontario, Canada
- Jason Vickress
- Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
- Robin Sachdeva
- Lawson Health Research Institute, London, Ontario, Canada
- Jose A Gomez
- London Health Sciences Centre, London, Ontario, Canada; Department of Pathology and Laboratory Medicine, Western University, London, Ontario, Canada
- David D'Souza
- Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
- Vikram Velker
- Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
- Lucas C Mendez
- Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
- Glenn Bauman
- Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Department of Oncology, Western University, London, Ontario, Canada; London Health Sciences Centre, London, Ontario, Canada
|
11
|
Kodenko MR, Vasilev YA, Vladzymyrskyy AV, Omelyanskaya OV, Leonov DV, Blokhin IA, Novik VP, Kulberg NS, Samorodov AV, Mokienko OA, Reshetnikov RV. Diagnostic Accuracy of AI for Opportunistic Screening of Abdominal Aortic Aneurysm in CT: A Systematic Review and Narrative Synthesis. Diagnostics (Basel) 2022; 12:diagnostics12123197. [PMID: 36553204 PMCID: PMC9777560 DOI: 10.3390/diagnostics12123197] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/16/2022] [Accepted: 12/14/2022] [Indexed: 12/24/2022] Open
Abstract
In this review, we focused on the applicability of artificial intelligence (AI) for opportunistic abdominal aortic aneurysm (AAA) detection in computed tomography (CT). We used the academic search system PubMed as the primary source for the literature search and Google Scholar as a supplementary source of evidence. We searched through 2 February 2022. All studies on automated AAA detection or segmentation in noncontrast abdominal CT were included. For bias assessment, we developed and used an adapted version of the QUADAS-2 checklist. We included eight studies with 355 cases, of which 273 (77%) contained AAA. The highest risk of bias and level of applicability concerns were observed for the "patient selection" domain, due to the 100% pathology rate in the majority (75%) of the studies. The mean sensitivity value was 95% (95% CI 100-87%), the mean specificity value was 96.6% (95% CI 100-75.7%), and the mean accuracy value was 95.2% (95% CI 100-54.5%). Half of the included studies performed diagnostic accuracy estimation, with only one study having data on all diagnostic accuracy metrics. Therefore, we conducted a narrative synthesis. Our findings indicate high study heterogeneity, requiring further research with balanced noncontrast CT datasets and adherence to reporting standards in order to validate the high sensitivity value obtained.
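The pooled sensitivity, specificity, and accuracy figures above follow directly from confusion-matrix counts. A minimal generic illustration (not from the review; the counts below are hypothetical):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts
    (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for an AAA detector on a test set of 150 scans
sens, spec, acc = diagnostic_metrics(tp=90, fp=2, fn=10, tn=48)
# sens = 0.90, spec = 0.96, acc = 0.92
```

Note that with a 100% pathology rate, as in most of the included studies, tn and fp are both zero, so specificity is undefined; this is exactly why the review calls for balanced noncontrast CT datasets.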
Affiliation(s)
- Maria R. Kodenko
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Department of Biomedical Technologies, Bauman Moscow State Technical University, 2nd Baumanskaya Street, 5, Building 1, 105005 Moscow, Russia
- Correspondence:
- Yuriy A. Vasilev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Anton V. Vladzymyrskyy
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Department of Information and Internet Technologies, I.M. Sechenov First Moscow State Medical University (Sechenov University), Trubetskaya Street, 8, Building 2, 119991 Moscow, Russia
- Olga V. Omelyanskaya
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Denis V. Leonov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Department of Fundamentals of Radio Engineering, Moscow Power Engineering Institute, Krasnokazarmennaya Street, 14, Building 1, 111250 Moscow, Russia
- Ivan A. Blokhin
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Vladimir P. Novik
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Nicholas S. Kulberg
- Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, Vavilova Street, 44, Building 2, 119333 Moscow, Russia
- Andrey V. Samorodov
- Department of Biomedical Technologies, Bauman Moscow State Technical University, 2nd Baumanskaya Street, 5, Building 1, 105005 Moscow, Russia
- Olesya A. Mokienko
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
- Roman V. Reshetnikov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department, Petrovka Street, 24, Building 1, 127051 Moscow, Russia
|
12
|
Papanastasiou G, García Seco de Herrera A, Wang C, Zhang H, Yang G, Wang G. Focus on machine learning models in medical imaging. Phys Med Biol 2022; 68:010301. [PMID: 36594883 DOI: 10.1088/1361-6560/aca069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 11/04/2022] [Indexed: 12/23/2022]
Affiliation(s)
- Heye Zhang
- Sun Yat-sen University, People's Republic of China
- Ge Wang
- Rensselaer Polytechnic Institute, United States of America
|
13
|
|
14
|
Snider EJ, Hernandez-Torres SI, Avital G, Boice EN. Evaluation of an Object Detection Algorithm for Shrapnel and Development of a Triage Tool to Determine Injury Severity. J Imaging 2022; 8:jimaging8090252. [PMID: 36135417 PMCID: PMC9501864 DOI: 10.3390/jimaging8090252] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 09/07/2022] [Accepted: 09/12/2022] [Indexed: 01/25/2023] Open
Abstract
Emergency medicine in austere environments relies on ultrasound imaging as an essential diagnostic tool. Without extensive training, however, identifying abnormalities such as shrapnel embedded in tissue is challenging, and medical professionals with the appropriate expertise are scarce in resource-constrained environments. Incorporating artificial intelligence models to aid interpretation can reduce this skill gap, enabling identification of shrapnel and its proximity to important anatomical features for improved medical treatment. Here, we apply a deep learning object detection framework, YOLOv3, to detect shrapnel of various sizes and locations with respect to a neurovascular bundle. Ultrasound images were collected in a tissue phantom containing shrapnel, vein, artery, and nerve features. The YOLOv3 framework classifies the object types and identifies their locations. On the testing dataset, the model successfully identified each object class, with a mean Intersection over Union of 0.73 and an average precision of 0.94. Furthermore, a triage tool was developed to quantify shrapnel distance from neurovascular features; it can notify the end user when a proximity threshold is surpassed and evacuation or surgical intervention may therefore be warranted. Overall, object detection models such as this will be vital to compensate for a lack of expertise in ultrasound interpretation, increasing its availability for emergency and military medicine.
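The Intersection over Union metric quoted above, and the proximity check behind the triage tool, are both simple box computations. A generic sketch under stated assumptions (axis-aligned boxes, center-to-center distance; not the paper's code, and the pixel spacing and threshold values are hypothetical):

```python
import math

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def proximity_alert(shrapnel_box, feature_box, threshold_mm, mm_per_px=1.0):
    """Flag shrapnel whose box center lies within threshold_mm of a detected
    neurovascular feature's box center (a simplification of the triage idea)."""
    sx, sy = (shrapnel_box[0] + shrapnel_box[2]) / 2, (shrapnel_box[1] + shrapnel_box[3]) / 2
    fx, fy = (feature_box[0] + feature_box[2]) / 2, (feature_box[1] + feature_box[3]) / 2
    return math.hypot(sx - fx, sy - fy) * mm_per_px < threshold_mm

# iou((0, 0, 2, 2), (1, 1, 3, 3)) -> 1/7, about 0.14
```

A detection counts as correct at a chosen IoU threshold (commonly 0.5) against the ground-truth box; average precision then summarizes precision over recall at that threshold.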
Affiliation(s)
- Eric J. Snider
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Guy Avital
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Trauma & Combat Medicine Branch, Surgeon General’s Headquarters, Israel Defense Forces, Ramat-Gan 52620, Israel
- Division of Anesthesia, Intensive Care & Pain Management, Tel-Aviv Sourasky Medical Center, Affiliated with the Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 64239, Israel
- Emily N. Boice
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Correspondence: ; Tel.: +1-210-539-8721
|