1. Yuan Y, Ahn E, Feng D, Khadra M, Kim J. Z-SSMNet: Zonal-aware Self-supervised Mesh Network for prostate cancer detection and diagnosis with Bi-parametric MRI. Comput Med Imaging Graph 2025;122:102510. PMID: 40010011. DOI: 10.1016/j.compmedimag.2025.102510.
Abstract
Bi-parametric magnetic resonance imaging (bpMRI) has become a pivotal modality in the detection and diagnosis of clinically significant prostate cancer (csPCa). Developing AI-based systems to identify csPCa using bpMRI can transform prostate cancer (PCa) management by improving efficiency and cost-effectiveness. However, current state-of-the-art methods using convolutional neural networks (CNNs) and Transformers are limited in learning in-plane and three-dimensional spatial information from anisotropic bpMRI. Their performance also depends on the availability of large, diverse, and well-annotated bpMRI datasets. To address these challenges, we propose the Zonal-aware Self-supervised Mesh Network (Z-SSMNet), which adaptively integrates multi-dimensional (2D/2.5D/3D) convolutions to learn dense intra-slice information and sparse inter-slice information of the anisotropic bpMRI in a balanced manner. We also propose a self-supervised learning (SSL) technique that effectively captures both intra-slice and inter-slice semantic information using large-scale unlabeled data. Furthermore, we constrain the network to focus on the zonal anatomical regions to improve the detection and diagnosis capability of csPCa. We conducted extensive experiments on the PI-CAI (Prostate Imaging - Cancer AI) dataset comprising 10,000+ multi-center, multi-scanner cases. Our Z-SSMNet excelled in both lesion-level detection (AP score of 0.633) and patient-level diagnosis (AUROC score of 0.881), securing the top position in the Open Development Phase of the PI-CAI challenge, and maintained strong performance in the Closed Testing Phase, achieving an AP score of 0.690 and an AUROC score of 0.909 to secure the second-place ranking. These findings underscore the potential of AI-driven systems for csPCa diagnosis and management.
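The multi-dimensional convolution idea above can be illustrated with a minimal PyTorch sketch that mixes an in-plane (2D-style) kernel with a through-plane kernel for anisotropic volumes. This is an illustration of the general idea only, not the authors' Z-SSMNet code; the block name, kernel shapes, and channel sizes are assumptions.

```python
# Minimal sketch of mixing in-plane and through-plane convolutions for anisotropic
# bpMRI volumes; an illustration of the idea, not the Z-SSMNet implementation.
import torch
import torch.nn as nn

class AnisotropicConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Dense intra-slice information: a 1x3x3 kernel acts like a 2D convolution per slice.
        self.intra_slice = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Sparse inter-slice information: a 3x1x1 kernel mixes neighbouring slices only.
        self.inter_slice = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.intra_slice(x) + self.inter_slice(x)))

# Example: a 3-channel (T2w/ADC/DWI) stack with 20 slices of 128x128 voxels.
x = torch.randn(1, 3, 20, 128, 128)
y = AnisotropicConvBlock(3, 16)(x)
print(y.shape)  # torch.Size([1, 16, 20, 128, 128])
```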
Affiliation(s)
- Yuan Yuan
- School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, 2006, NSW, Australia.
- Euijoon Ahn
- College of Science and Engineering, James Cook University, Cairns, 4870, QLD, Australia.
- Dagan Feng
- School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, 2006, NSW, Australia; Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Mohamed Khadra
- Department of Urology, Nepean Hospital, Sydney, 2747, NSW, Australia; Telehealth and Technology Centre, Nepean Blue Mountains Local Health District (NBMLHD), Sydney, 2750, NSW, Australia.
- Jinman Kim
- School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, 2006, NSW, Australia; Telehealth and Technology Centre, Nepean Blue Mountains Local Health District (NBMLHD), Sydney, 2750, NSW, Australia.
2. Yan W, Hu Y, Yang Q, Fu Y, Syer T, Min Z, Punwani S, Emberton M, Barratt DC, Cho CCM, Chiu B. A semi-supervised prototypical network for prostate lesion segmentation from multimodality MRI. Phys Med Biol 2025;70:085020. PMID: 40096822. DOI: 10.1088/1361-6560/adc182.
Abstract
Objective. Prostate lesion segmentation from multiparametric magnetic resonance images is particularly challenging due to the limited availability of labeled data. This scarcity of annotated images makes it difficult for supervised models to learn the complex features necessary for accurate lesion detection and segmentation. Approach. We proposed a novel semi-supervised algorithm that embeds prototype learning into mean-teacher (MT) training to improve the feature representation for unlabeled data. In this method, pseudo-labels generated by the teacher network simultaneously serve as supervision for unlabeled prototype-based segmentation. By enabling prototype segmentation to operate across labeled and unlabeled data, the network enriches the pool of "lesion representative prototypes" and allows prototypes to flow bidirectionally, along both support-to-query and query-to-support paths. This intersected, bidirectional information flow strengthens the model's generalization ability. The approach is distinct from the MT algorithm in that it involves few-shot training, and differs from prototypical learning in that it adopts unlabeled data for training. Main results. This study evaluated multiple datasets with 767 patients from three different institutions, including the publicly available PROSTATEx/PROSTATEx2 datasets as the holdout institution for reproducibility. The experimental results showed that the proposed algorithm outperformed state-of-the-art semi-supervised methods when labeled data were limited, with improvements in the Dice similarity coefficient ranging from 0.04 to 0.09 as the amount of labeled data increased. Significance. Our method shows promise in improving segmentation outcomes with limited labeled data and potentially aiding clinicians in making informed patient treatment and management decisions. The algorithm implementation is available on GitHub at git@github.com:yanwenCi/semi-proto-seg.git.
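The prototype-based segmentation mechanism described above can be sketched in a few lines of PyTorch: a lesion prototype is pooled from support features under the (pseudo-)label mask and compared with query features by cosine similarity. This is a generic illustration of prototypical segmentation, not the authors' network; tensor shapes and the temperature value are assumptions.

```python
# Minimal sketch of prototype-based segmentation via masked average pooling and
# cosine similarity; illustrates the general technique, not the authors' model.
import torch
import torch.nn.functional as F

def masked_average_pooling(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) support features; mask: (B, 1, H, W) binary lesion mask."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    proto = (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)
    return proto.mean(dim=0)                                               # (C,)

def prototype_segment(query_feat: torch.Tensor, proto: torch.Tensor, tau: float = 20.0):
    """Cosine similarity between each query pixel and the prototype, scaled by tau."""
    q = F.normalize(query_feat, dim=1)                   # (B, C, H, W)
    p = F.normalize(proto, dim=0).view(1, -1, 1, 1)      # (1, C, 1, 1)
    return tau * (q * p).sum(dim=1, keepdim=True)        # (B, 1, H, W) logits

support_feat = torch.randn(2, 64, 32, 32)
support_mask = torch.randint(0, 2, (2, 1, 32, 32)).float()
query_feat = torch.randn(1, 64, 32, 32)
logits = prototype_segment(query_feat, masked_average_pooling(support_feat, support_mask))
print(torch.sigmoid(logits).shape)  # torch.Size([1, 1, 32, 32])
```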
Affiliation(s)
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong Special Administrative Region of China, People's Republic of China
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Yipeng Hu
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Qianye Yang
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Yunguan Fu
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Department of BioAI, InstaDeep, 5 Merchant Sq, London W2 1AY, United Kingdom
- Tom Syer
- Centre for Medical Imaging, Division of Medicine, University College London, Foley Street, W1W 7TS London, United Kingdom
- Zhe Min
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Shonit Punwani
- Centre for Medical Imaging, Division of Medicine, University College London, Foley Street, W1W 7TS London, United Kingdom
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, Gower St, London WC1E 6BT, United Kingdom
- Dean C Barratt
- UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom
- Carmen C M Cho
- Department of Imaging and Interventional Radiology, Prince of Wales Hospital, 30-32 Ngan Shing Street, Shatin, New Territories, Hong Kong Special Administrative Region of China, People's Republic of China
- Bernard Chiu
- Department of Physics & Computer Science, Wilfrid Laurier University, 75 University Avenue West, N2L 3C5 Waterloo, Canada
3. Jiang M, Wang S, Chan KH, Sun Y, Xu Y, Zhang Z, Gao Q, Gao Z, Tong T, Chang HC, Tan T. Multimodal Cross Global Learnable Attention Network for MR images denoising with arbitrary modal missing. Comput Med Imaging Graph 2025;121:102497. PMID: 39904265. DOI: 10.1016/j.compmedimag.2025.102497.
Abstract
Magnetic Resonance Imaging (MRI) generates medical images of multiple sequences, i.e., multimodal images acquired with different contrasts. However, noise reduces the quality of MR images and in turn affects the doctor's diagnosis of diseases. Existing filtering methods, transform-domain methods, statistical methods, and Convolutional Neural Network (CNN) methods mainly aim to denoise individual sequences of images without considering the relationships between multiple different sequences. They cannot balance the extraction of high-dimensional and low-dimensional features in MR images, and they struggle to maintain a good balance between preserving image texture details and denoising strength. To overcome these challenges, this work proposes a controllable Multimodal Cross Global Learnable Attention Network (MMCGLANet) for MR image denoising with arbitrary modal missing. Specifically, a weight-sharing encoder is employed to extract the shallow features of the image, and Convolutional Long Short-Term Memory (ConvLSTM) is employed to extract the associated features between different frames within the same modality. A Cross Global Learnable Attention Network (CGLANet) is employed to extract and fuse image features both across modalities and within the same modality. In addition, a sequence code is employed to label missing modalities, which allows for arbitrary modal missing during model training, validation, and testing. Experimental results demonstrate that our method achieves good denoising results on different public and real MR image datasets.
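The "sequence code" idea for arbitrary missing modalities can be sketched as a binary availability vector plus zero-filled channels, as below. This is a minimal illustration under assumed modality names and shapes, not the MMCGLANet code.

```python
# Minimal sketch of labelling missing MRI modalities with a binary "sequence code"
# and zero-filling the absent channels; an illustration of the general idea only.
import torch

def build_input(modalities: dict, shape=(1, 20, 128, 128)):
    """modalities: name -> tensor or None (missing). Returns stacked volume + code."""
    names = ["T1", "T2", "FLAIR", "DWI"]           # assumed fixed modality order
    vols, code = [], []
    for n in names:
        vol = modalities.get(n)
        code.append(0.0 if vol is None else 1.0)
        vols.append(torch.zeros(shape) if vol is None else vol)
    x = torch.cat(vols, dim=0).unsqueeze(0)         # (1, M, D, H, W)
    return x, torch.tensor(code)                    # the code can condition the network

x, code = build_input({"T1": torch.randn(1, 20, 128, 128), "T2": None,
                       "FLAIR": torch.randn(1, 20, 128, 128), "DWI": None})
print(x.shape, code)  # torch.Size([1, 4, 20, 128, 128]) tensor([1., 0., 1., 0.])
```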
Affiliation(s)
- Mingfu Jiang
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China; College of Information Engineering, Xinyang Agriculture and Forestry University, No. 1 North Ring Road, Pingqiao District, Xinyang, 464000, Henan, China
- Shuai Wang
- School of Cyberspace, Hangzhou Dianzi University, No. 65 Wen Yi Road, Hangzhou, 310018, Zhejiang, China
- Ka-Hou Chan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Yue Sun
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Yi Xu
- Shanghai Key Lab of Digital Media Processing and Transmission, Shanghai Jiao Tong University MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, No. 800 Dongchuan Road, Minhang District, Shanghai, 200030, China
- Zhuoneng Zhang
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China
- Qinquan Gao
- College of Physics and Information Engineering, Fuzhou University, No. 2 Wulongjiang Avenue, Fuzhou, 350108, Fujian, China
- Zhifan Gao
- School of Biomedical Engineering, Sun Yat-sen University, No. 66 Gongchang Road, Guangming District, Shenzhen, 518107, Guangdong, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, No. 2 Wulongjiang Avenue, Fuzhou, 350108, Fujian, China
- Hing-Chiu Chang
- Department of Biomedical Engineering, Chinese University of Hong Kong, Sha Tin District, 999077, Hong Kong, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, 999078, Macao Special Administrative Region of China.
4. Santinha J, Pinto Dos Santos D, Laqua F, Visser JJ, Groot Lipman KBW, Dietzel M, Klontzas ME, Cuocolo R, Gitto S, Akinci D'Antonoli T. ESR Essentials: radiomics-practice recommendations by the European Society of Medical Imaging Informatics. Eur Radiol 2025;35:1122-1132. PMID: 39453470. PMCID: PMC11835989. DOI: 10.1007/s00330-024-11093-9.
Abstract
Radiomics is a method to extract detailed information from diagnostic images that cannot be perceived by the naked eye. Although radiomics research carries great potential to improve clinical decision-making, its inherent methodological complexities make it difficult to comprehend every step of the analysis, often causing reproducibility and generalizability issues that hinder clinical adoption. Critical steps in the radiomics analysis and model development pipeline - such as image, application of image filters, and selection of feature extraction parameters - can greatly affect the values of radiomic features. Moreover, common errors in data partitioning, model comparison, fine-tuning, assessment, and calibration can reduce reproducibility and impede clinical translation. Clinical adoption of radiomics also requires a deep understanding of model explainability and the development of intuitive interpretations of radiomic features. To address these challenges, it is essential for radiomics model developers and clinicians to be well-versed in current best practices. Proper knowledge and application of these practices are crucial for accurate radiomics feature extraction, robust model development, and thorough assessment, ultimately increasing reproducibility, generalizability, and the likelihood of successful clinical translation. In this article, we have provided researchers with our recommendations along with practical examples to facilitate good research practices in radiomics. KEY POINTS: Radiomics' inherent methodological complexity should be understood to ensure rigorous radiomic model development to improve clinical decision-making. Adherence to radiomics-specific checklists and quality assessment tools ensures methodological rigor. Use of standardized radiomics tools and best practices enhances clinical translation of radiomics models.
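As a concrete illustration of how extraction parameters change radiomic feature values, the sketch below varies the intensity bin width with pyradiomics. The file paths are placeholders and the feature key follows pyradiomics' GLCM naming; this is a minimal example, not the authors' recommended pipeline.

```python
# Minimal sketch (pyradiomics) showing how an extraction parameter such as the
# intensity bin width changes radiomic feature values; paths are placeholders.
from radiomics import featureextractor

for bin_width in (10, 25, 50):
    extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=bin_width)
    extractor.disableAllFeatures()
    extractor.enableFeatureClassByName("glcm")       # restrict to GLCM texture features
    features = extractor.execute("t2w_image.nii.gz", "lesion_mask.nii.gz")
    entropy = features["original_glcm_JointEntropy"]
    print(f"binWidth={bin_width}: GLCM joint entropy = {entropy}")
```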
Affiliation(s)
- João Santinha
- Digital Surgery LAB, Champalimaud Research, Champalimaud Foundation, Av. Brasília, 1400-038, Lisbon, Portugal.
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001, Lisbon, Portugal.
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- Fabian Laqua
- Department of Diagnostic and Interventional Radiology, University Hospital Wuerzburg, Wuerzburg, Germany
- Jacob J Visser
- Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Kevin B W Groot Lipman
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Thoracic Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Matthias Dietzel
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 3, 91054, Erlangen, Germany
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Heraklion, Crete, Greece
- Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece
- Division of Radiology, Department of Clinical Science Intervention and Technology (CLINTEC), Karolinska Institute, Solna, Sweden
- Renato Cuocolo
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Tugba Akinci D'Antonoli
- Institute of Radiology and Nuclear Medicine, Cantonal Hospital Baselland, Liestal, Switzerland
5. Giganti F, Moreira da Silva N, Yeung M, Davies L, Frary A, Ferrer Rodriguez M, Sushentsev N, Ashley N, Andreou A, Bradley A, Wilson C, Maskell G, Brembilla G, Caglic I, Suchánek J, Budd J, Arya Z, Aning J, Hayes J, De Bono M, Vasdev N, Sanmugalingam N, Burn P, Persad R, Woitek R, Hindley R, Liyanage S, Squire S, Barrett T, Barwick S, Hinton M, Padhani AR, Rix A, Shah A, Sala E. AI-powered prostate cancer detection: a multi-centre, multi-scanner validation study. Eur Radiol 2025. PMID: 40016318. DOI: 10.1007/s00330-024-11323-0.
Abstract
OBJECTIVES Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologist's interpretations. MATERIALS AND METHODS A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). Our separate validation dataset comprised scans from six machines from two manufacturers across six sites (252 patients). Data included in the study were from MRI scans performed between August 2018 and October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had prostate-specific antigen density < 0.15 ng/mL²). ROC analysis was used to compare radiologists who used a 5-category suspicion score. RESULTS GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging-Reporting and Data System/Likert ≥ 3 identified GG ≥ 2 with a sensitivity of 99% and specificity of 73%. AI performed well per site (AUC ≥ 0.83) at the patient level, independent of scanner age and field strength. CONCLUSION Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models. KEY POINTS Question: The performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability. Findings: A dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners. Clinical relevance: This software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.
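The per-patient evaluation described above (AUC plus sensitivity/specificity at a fixed risk threshold) can be reproduced with a few lines of scikit-learn; the labels and scores below are synthetic placeholders, not study data.

```python
# Minimal sketch of patient-level evaluation: AUC plus sensitivity/specificity at a
# fixed risk threshold; y_true / y_score are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # GG >= 2 ground truth
y_score = np.array([1.2, 3.0, 4.1, 3.6, 3.8, 4.8, 3.4, 3.7])   # AI risk scores

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 3.5).astype(int)                           # predetermined threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC={auc:.2f}, sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```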
Affiliation(s)
- Francesco Giganti
- Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK.
- Division of Surgery & Interventional Science, University College London, London, UK.
- Nikita Sushentsev
- Cambridge University Hospitals NHS Foundation Trust & University of Cambridge, Cambridge, UK
- Nicholas Ashley
- Lucida Medical Ltd, Cambridge, UK
- Royal Cornwall Hospitals NHS Trust, Truro, UK
- Adrian Andreou
- Royal United Hospitals Bath NHS Foundation Trust, Bath, UK
- Giorgio Brembilla
- IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Iztok Caglic
- Cambridge University Hospitals NHS Foundation Trust & University of Cambridge, Cambridge, UK
- John Hayes
- East and North Herts NHS Trust, Stevenage, UK
- University of Hertfordshire, Hatfield, UK
- Mark De Bono
- Mid and South Essex NHS Foundation Trust, Southend, UK
- Nikhil Vasdev
- East and North Herts NHS Trust, Stevenage, UK
- University of Hertfordshire, Hatfield, UK
- Nimalan Sanmugalingam
- Cambridge University Hospitals NHS Foundation Trust & University of Cambridge, Cambridge, UK
- Paul Burn
- Somerset NHS Foundation Trust, Taunton, UK
- Ramona Woitek
- Research Center for Medical Image Analysis and Artificial Intelligence (MIAAI), Danube Private University, Krems an der Donau, Austria
- Richard Hindley
- University of Winchester, Winchester, UK
- Hampshire Hospitals NHS Foundation Trust, Winchester, UK
- Sophie Squire
- Hampshire Hospitals NHS Foundation Trust, Winchester, UK
- Tristan Barrett
- Cambridge University Hospitals NHS Foundation Trust & University of Cambridge, Cambridge, UK
- Anwar R Padhani
- Paul Strickland Scanner Centre, Mount Vernon Hospital, Northwood, UK
- Aarti Shah
- Hampshire Hospitals NHS Foundation Trust, Winchester, UK
- Evis Sala
- Dipartimento Diagnostica per Immagini e Radioterapia Oncologica, Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Dipartimento di Scienze Radiologiche ed Ematologiche, Università Cattolica del Sacro Cuore, Rome, Italy
6. Wei C, Liu Z, Zhang Y, Fan L. Enhancing prostate cancer segmentation in bpMRI: Integrating zonal awareness into attention-guided U-Net. Digit Health 2025;11:20552076251314546. PMID: 39866889. PMCID: PMC11758924. DOI: 10.1177/20552076251314546.
Abstract
Purpose: Prostate cancer (PCa) is the second most common cancer in males worldwide, requiring improvements in diagnostic imaging to identify and treat it at an early stage. Bi-parametric magnetic resonance imaging (bpMRI) is recognized as an essential diagnostic technique for PCa, providing shorter acquisition times and cost-effectiveness. Nevertheless, accurate diagnosis using bpMRI images is difficult due to the inconspicuous and diverse characteristics of malignant tumors and the intricate structure of the prostate gland. An automated system is required to assist medical professionals in accurate and early diagnosis with less effort. Method: This study recognizes the impact of zonal features on the advancement of the disease. The aim is to improve diagnostic performance through a novel automated two-step approach using bpMRI images. First, a convolutional neural network (CNN)-based attention-guided U-Net model is pretrained to segment the region of interest, namely the prostate zone. Second, the same type of Attention U-Net is pretrained for lesion segmentation. Results: The performance of the pretrained models is compared with that of an attention-guided U-Net trained from scratch for segmenting tumors in the prostate region. The proposed attention-guided U-Net model achieved an area under the curve (AUC) of 0.85 and a dice similarity coefficient value of 0.82, outperforming some other pretrained deep learning models. Conclusion: Our approach greatly enhances the identification and categorization of clinically significant PCa by including zonal data. Our approach exhibits exceptional performance in the accurate segmentation of bpMRI images compared to current techniques, as evidenced by thorough validation on a diverse dataset. This research not only enhances the field of medical imaging for oncology but also underscores the potential of deep learning models to progress PCa diagnosis and personalized patient care.
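A minimal sketch of the additive attention gate commonly used in attention-guided U-Nets is shown below; it illustrates the mechanism in general, not the authors' exact model, and the channel sizes are assumptions.

```python
# Minimal sketch of an additive attention gate of the kind used in attention-guided
# U-Nets; an illustration of the mechanism, not the authors' model.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # The gating signal comes from a coarser decoder level; upsample to the skip size.
        gate = nn.functional.interpolate(gate, size=skip.shape[-2:], mode="bilinear",
                                         align_corners=False)
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * attn  # suppress irrelevant regions in the skip connection

skip, gate = torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 64, 64])
```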
Affiliation(s)
- Chao Wei
- Department of Urology, General Hospital of Northern Theater Command, Shenyang, China
- Zheng Liu
- Department of Urology, General Hospital of Northern Theater Command, Shenyang, China
- Department of Graduate School, China Medical University, Shenyang, China
- Yibo Zhang
- Nanomage Research Institute, Beijing, China
- Gezhi AI Research Institute, Beijing, China
- School of Systems and Computing, University of New South Wales, Kensington, Australia
- Lianhui Fan
- Department of Urology, General Hospital of Northern Theater Command, Shenyang, China
7. Li W, Zheng B, Shen Q, Shi X, Luo K, Yao Y, Li X, Lv S, Tao J, Wei Q. Adaptive window adjustment with boundary DoU loss for cascade segmentation of anatomy and lesions in prostate cancer using bpMRI. Neural Netw 2025;181:106831. PMID: 39481199. DOI: 10.1016/j.neunet.2024.106831.
Abstract
Accurate segmentation of prostate anatomy and lesions using biparametric magnetic resonance imaging (bpMRI) is crucial for the diagnosis and treatment of prostate cancer with the aid of artificial intelligence. In prostate bpMRI, different tissues and pathologies are best visualized within specific and narrow intensity ranges for each sequence, leading to varying requirements for image window settings. Currently, adjustments to window settings rely on experience, lacking an efficient method for universal automated adjustment. Hence, we propose an Adaptive Window Adjustment (AWA) module capable of adjusting window settings to accommodate different image modalities, sample data, and downstream tasks. Moreover, given the pivotal role that loss functions play in optimizing model performance, we investigate the performance of different loss functions in segmenting prostate anatomy and lesions. Our study validates the superiority of the Boundary Difference over Union (DoU) Loss in these tasks and extends its applicability to 3D medical imaging. Finally, we propose a cascaded segmentation approach tailored for prostate anatomy and lesions. This approach leverages anatomical structure information to enhance lesion segmentation accuracy. Experimental results on the Prostate158, ProstateX, and PI-CAI datasets confirm the effectiveness of the proposed methods. Our code is available at https://github.com/WenHao-L/AWA_BoundaryDoULoss.
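The idea of making window settings learnable can be illustrated with a differentiable window/level layer as sketched below. This generic module is a stand-in for illustration only and is not the authors' AWA module; see their repository for the actual implementation.

```python
# Minimal sketch of a learnable intensity-windowing layer, illustrating the idea of
# trainable window centre/width; not the authors' AWA module.
import torch
import torch.nn as nn

class LearnableWindow(nn.Module):
    def __init__(self, init_center: float = 0.5, init_width: float = 1.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(init_center))
        self.width = nn.Parameter(torch.tensor(init_width))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Smooth (differentiable) approximation of a window/level operation.
        return torch.sigmoid((x - self.center) / (self.width.abs() + 1e-6))

x = torch.rand(1, 1, 64, 64)            # a normalised bpMRI slice (placeholder)
print(LearnableWindow()(x).mean())       # gradients flow into center and width
```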
Affiliation(s)
- Wenhao Li
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Bowen Zheng
- Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Quanyou Shen
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Xiaoran Shi
- Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Kun Luo
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Yuqian Yao
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Xinyan Li
- School of Biomedical and Pharmaceutical Sciences, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Laboratory of Chemistry and Fine Chemical Engineering Jieyang Center, Jieyang, 515200, China
- Shidong Lv
- Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jie Tao
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China.
- Qiang Wei
- Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China; Department of Urology, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, 510080, China.
8. Gao Y, Vali M. Combination of Deep and Statistical Features of the Tissue of Pathology Images to Classify and Diagnose the Degree of Malignancy of Prostate Cancer. J Imaging Inform Med 2024. PMID: 39663318. DOI: 10.1007/s10278-024-01363-9.
Abstract
Prostate cancer is one of the most prevalent male-specific diseases, where early and accurate diagnosis is essential for effective treatment and preventing disease progression. Assessing disease severity involves analyzing histological tissue samples, which are graded from 1 (healthy) to 5 (severely malignant) based on pathological features. However, traditional manual grading is labor-intensive and prone to variability. This study addresses the challenge of automating prostate cancer classification by proposing a novel histological grade analysis approach. The method integrates the gray-level co-occurrence matrix (GLCM) for extracting texture features with Haar wavelet modification to enhance feature quality. A convolutional neural network (CNN) is then employed for robust classification. The proposed method was evaluated using statistical and performance metrics, achieving an average accuracy of 97.3%, a precision of 98%, and an AUC of 0.95. These results underscore the effectiveness of the approach in accurately categorizing prostate tissue grades. This study demonstrates the potential of automated classification methods to support pathologists, enhance diagnostic precision, and improve clinical outcomes in prostate cancer care.
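A minimal sketch of the GLCM-on-Haar-wavelet feature extraction step is shown below using scikit-image (version 0.19+ naming) and PyWavelets; the input patch is a random placeholder rather than a histology tile, and the chosen distances/angles are assumptions.

```python
# Minimal sketch of GLCM texture features computed on the Haar-wavelet approximation
# of a tissue patch; illustrative only, with a random placeholder instead of a tile.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)     # placeholder tile
approx, _ = pywt.dwt2(patch, "haar")                          # low-frequency sub-band
approx = np.uint8(255 * (approx - approx.min()) / (np.ptp(approx) + 1e-9))

glcm = graycomatrix(approx, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```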
Affiliation(s)
- Yan Gao
- School of Electrical and Mechanical Engineering, Xuchang University, Xuchang, 461000, Henan, China.
- Mahsa Vali
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, 84156-83111, Iran
9. Wu C, Chen Q, Wang H, Guan Y, Mian Z, Huang C, Ruan C, Song Q, Jiang H, Pan J, Li X. A review of deep learning approaches for multimodal image segmentation of liver cancer. J Appl Clin Med Phys 2024;25:e14540. PMID: 39374312. PMCID: PMC11633801. DOI: 10.1002/acm2.14540.
Abstract
This review examines the recent developments in deep learning (DL) techniques applied to multimodal fusion image segmentation for liver cancer. Hepatocellular carcinoma is a highly dangerous malignant tumor that requires accurate image segmentation for effective treatment and disease monitoring. Multimodal image fusion has the potential to offer more comprehensive information and more precise segmentation, and DL techniques have achieved remarkable progress in this domain. This paper starts with an introduction to liver cancer, then explains the preprocessing and fusion methods for multimodal images, and then explores the application of DL methods in this area. Various DL architectures, such as convolutional neural networks (CNNs) and U-Net, are discussed along with their benefits for multimodal image fusion segmentation. Furthermore, the evaluation metrics and datasets currently used to measure the performance of segmentation models are reviewed. Alongside this review of progress, the challenges of current research, such as data imbalance, model generalization, and model interpretability, are emphasized and future research directions are suggested. The application of DL in multimodal image segmentation for liver cancer is transforming the field of medical imaging and is expected to further enhance the accuracy and efficiency of clinical decision making. This review provides useful insights and guidance for medical practitioners.
Affiliation(s)
- Chaopeng Wu
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Qiyao Chen
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Haoyu Wang
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Yu Guan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Zhangyang Mian
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Cong Huang
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Changli Ruan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Qibin Song
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Hao Jiang
- School of Electronic Information, Wuhan University, Wuhan, Hubei, China
- Jinghui Pan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- School of Electronic Information, Wuhan University, Wuhan, Hubei, China
- Xiangpan Li
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
10. Mylona E, Zaridis DI, Kalantzopoulos CN, Tachos NS, Regge D, Papanikolaou N, Tsiknakis M, Marias K, Fotiadis DI. Optimizing radiomics for prostate cancer diagnosis: feature selection strategies, machine learning classifiers, and MRI sequences. Insights Imaging 2024;15:265. PMID: 39495422. PMCID: PMC11535140. DOI: 10.1186/s13244-024-01783-9.
Abstract
OBJECTIVES Radiomics-based analyses encompass multiple steps, leading to ambiguity regarding the optimal approaches for enhancing model performance. This study compares the effect of several feature selection methods, machine learning (ML) classifiers, and sources of radiomic features, on models' performance for the diagnosis of clinically significant prostate cancer (csPCa) from bi-parametric MRI. METHODS Two multi-centric datasets, with 465 and 204 patients each, were used to extract 1246 radiomic features per patient and MRI sequence. Ten feature selection methods, such as Boruta, mRMRe, ReliefF, recursive feature elimination (RFE), random forest (RF) variable importance, L1-lasso, etc., four ML classifiers, namely SVM, RF, LASSO, and boosted generalized linear model (GLM), and three sets of radiomics features, derived from T2w images, ADC maps, and their combination, were used to develop predictive models of csPCa. Their performance was evaluated in a nested cross-validation and externally, using seven performance metrics. RESULTS In total, 480 models were developed. In nested cross-validation, the best model combined Boruta with Boosted GLM (AUC = 0.71, F1 = 0.76). In external validation, the best model combined L1-lasso with boosted GLM (AUC = 0.71, F1 = 0.47). Overall, Boruta, RFE, L1-lasso, and RF variable importance were the top-performing feature selection methods, while the choice of ML classifier didn't significantly affect the results. The ADC-derived features showed the highest discriminatory power with T2w-derived features being less informative, while their combination did not lead to improved performance. CONCLUSION The choice of feature selection method and the source of radiomic features have a profound effect on the models' performance for csPCa diagnosis. CRITICAL RELEVANCE STATEMENT This work may guide future radiomic research, paving the way for the development of more effective and reliable radiomic models; not only for advancing prostate cancer diagnostic strategies, but also for informing broader applications of radiomics in different medical contexts. KEY POINTS Radiomics is a growing field that can still be optimized. Feature selection method impacts radiomics models' performance more than ML algorithms. Best feature selection methods: RFE, LASSO, RF, and Boruta. ADC-derived radiomic features yield more robust models compared to T2w-derived radiomic features.
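Keeping feature selection inside the cross-validation loop, as evaluated above, can be sketched with a scikit-learn pipeline and nested cross-validation. The L1-based selector, classifier settings, and synthetic data below are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of nested cross-validation with L1-based feature selection kept
# inside the loop; data and hyperparameter grid are placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X = np.random.rand(200, 1246)                    # radiomic features (placeholder)
y = np.random.randint(0, 2, 200)                 # csPCa labels (placeholder)

pipe = Pipeline([
    ("scale", StandardScaler()),
    # Keep the 64 highest-|coefficient| features from an L1-penalised model.
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear"),
                               threshold=-np.inf, max_features=64)),
    ("clf", LogisticRegression(max_iter=1000)),
])
inner = GridSearchCV(pipe, {"select__estimator__C": [0.1, 1.0, 10.0]},
                     scoring="roc_auc", cv=3)
outer_auc = cross_val_score(inner, X, y, scoring="roc_auc", cv=5)   # nested CV
print(outer_auc.mean(), outer_auc.std())
```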
Affiliation(s)
- Eugenia Mylona
- Biomedical Research Institute, FORTH, GR 45110, Ioannina, Greece
- Unit of Medical Technology Intelligent Information Systems, University of Ioannina, Ioannina, Greece
- Dimitrios I Zaridis
- Biomedical Research Institute, FORTH, GR 45110, Ioannina, Greece
- Unit of Medical Technology Intelligent Information Systems, University of Ioannina, Ioannina, Greece
- Biomedical Engineering Laboratory, School of Electrical & Computer Engineering, National Technical University of Athens, Athens, Greece
- Charalampos N Kalantzopoulos
- Biomedical Research Institute, FORTH, GR 45110, Ioannina, Greece
- Unit of Medical Technology Intelligent Information Systems, University of Ioannina, Ioannina, Greece
- Nikolaos S Tachos
- Biomedical Research Institute, FORTH, GR 45110, Ioannina, Greece
- Unit of Medical Technology Intelligent Information Systems, University of Ioannina, Ioannina, Greece
- Daniele Regge
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy
- Manolis Tsiknakis
- Computational Biomedicine Laboratory, Institute of Computer Science, FORTH, GR 70013, Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, GR 71004, Heraklion, Greece
- Kostas Marias
- Computational Biomedicine Laboratory, Institute of Computer Science, FORTH, GR 70013, Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, GR 71004, Heraklion, Greece
- Dimitrios I Fotiadis
- Biomedical Research Institute, FORTH, GR 45110, Ioannina, Greece.
- Unit of Medical Technology Intelligent Information Systems, University of Ioannina, Ioannina, Greece.
11. Murugesan GK, McCrumb D, Aboian M, Verma T, Soni R, Memon F, Farahani K, Pei L, Wagner U, Fedorov AY, Clunie D, Moore S, Van Oss J. AI-Generated Annotations Dataset for Diverse Cancer Radiology Collections in NCI Image Data Commons. Sci Data 2024;11:1165. PMID: 39443503. PMCID: PMC11500357. DOI: 10.1038/s41597-024-03977-8.
Abstract
The National Cancer Institute (NCI) Image Data Commons (IDC) offers publicly available cancer radiology collections for cloud computing, crucial for developing advanced imaging tools and algorithms. Despite their potential, these collections are minimally annotated; only 4% of DICOM studies in the collections considered in the project had existing segmentation annotations. This project increases the quantity of segmentations in various IDC collections. We produced a high-quality dataset of AI-generated imaging annotations of tissues, organs, and/or cancers for 11 distinct IDC image collections. These collections contain images from a variety of modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The collections cover various body parts, such as the chest, breast, kidneys, prostate, and liver. A portion of the AI annotations was reviewed and corrected by a radiologist to assess the performance of the AI models. Both the AI's and the radiologist's annotations were encoded in conformance with the Digital Imaging and Communications in Medicine (DICOM) standard, allowing for seamless integration into the IDC collections as third-party analysis collections. All the models, images, and annotations are publicly accessible.
Affiliation(s)
- Tej Verma
- Yale School of Medicine, New Haven, CT, USA
- Linmin Pei
- Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Ulrike Wagner
- Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Andrey Y Fedorov
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
12. D'Anna G, Ugga L, Cuocolo R. The quest for open datasets: all that glitters is not gold. Eur Radiol 2024;34:5886-5888. PMID: 38478059. DOI: 10.1007/s00330-024-10682-y.
Affiliation(s)
- Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Renato Cuocolo
- Department of Medicine, Surgery and Dentistry, University of Salerno, Via Salvador Allende 43, 84081, Baronissi, Italy.
13. Zaridis DI, Mylona E, Tsiknakis N, Tachos NS, Matsopoulos GK, Marias K, Tsiknakis M, Fotiadis DI. ProLesA-Net: A multi-channel 3D architecture for prostate MRI lesion segmentation with multi-scale channel and spatial attentions. Patterns (N Y) 2024;5:100992. PMID: 39081575. PMCID: PMC11284496. DOI: 10.1016/j.patter.2024.100992.
Abstract
Prostate cancer diagnosis and treatment rely on precise MRI lesion segmentation, which is notably challenging for small (<15 mm) and intermediate (15-30 mm) lesions. Our study introduces ProLesA-Net, a multi-channel 3D deep-learning architecture with multi-scale squeeze-and-excitation and attention gate mechanisms. Tested against six models across two datasets, ProLesA-Net significantly outperformed them on key metrics: the Dice score increased by 2.2%, Hausdorff distance and average surface distance improved by 0.5 mm, and recall and precision also improved. Specifically, for lesions under 15 mm, our model showed a notable increase across five key metrics. In summary, ProLesA-Net consistently ranked at the top, demonstrating enhanced performance and stability. This advancement addresses crucial challenges in prostate lesion segmentation, enhancing clinical decision making and expediting treatment processes.
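A minimal sketch of a squeeze-and-excitation (channel attention) block, one of the mechanism families mentioned above, is given below. It is a generic illustration, not the ProLesA-Net module, and the channel/reduction sizes are assumptions.

```python
# Minimal sketch of a 3D squeeze-and-excitation (channel attention) block; a generic
# illustration of the mechanism, not the ProLesA-Net implementation.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                      # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)  # excitation weights
        return x * w                                              # channel re-weighting

x = torch.randn(2, 32, 16, 64, 64)
print(SEBlock3D(32)(x).shape)  # torch.Size([2, 32, 16, 64, 64])
```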
Affiliation(s)
- Dimitrios I. Zaridis
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Biomedical Engineering Laboratory, School of Electrical & Computer Engineering, National Technical University of Athens, 9 Iroon Polytechniou St., 15780 Athens, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- Eugenia Mylona
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- Nikolaos S. Tachos
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- George K. Matsopoulos
- Biomedical Engineering Laboratory, School of Electrical & Computer Engineering, National Technical University of Athens, 9 Iroon Polytechniou St., 15780 Athens, Greece
- Kostas Marias
- Computational Biomedicine Laboratory, FORTH, Heraklion, Greece
- Dimitrios I. Fotiadis
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
14. Zaridis DI, Mylona E, Tachos NS, Kalantzopoulos C, Marias K, Tsiknakis M, Fotiadis DI. Spatial Attention-Enhanced Encoder-Decoder Network for Accurate Segmentation of the Prostate's Transition Zone. Annu Int Conf IEEE Eng Med Biol Soc 2024;2024:1-4. PMID: 40040093. DOI: 10.1109/embc53108.2024.10781592.
Abstract
Accurate segmentation of the prostate and its substructures is the most important component of reliable localization and characterization of prostate cancer. In this study, a Spatial Attention Residual U-Net (Spatial ResU-Net) deep learning (DL) network is proposed for segmenting the transition zone of the prostate, leveraging the learning capacity of spatial attention modules and residual connections. Spatial attention modules efficiently extract features in an intra-channel manner and boost the performance of the encoder and decoder, while residual connections facilitate information flow across the network's levels. The proposed model was compared against 8 state-of-the-art DL segmentation models and demonstrated superior performance. The improvement in terms of Sensitivity, Dice Score, Hausdorff distance, and Average surface distance was at least 1%, 1%, 0.05 mm, and 0.09 mm, respectively.
15. Kilintzis V, Kalokyri V, Kondylakis H, Joshi S, Nikiforaki K, Díaz O, Lekadir K, Tsiknakis M, Marias K. Public data homogenization for AI model development in breast cancer. Eur Radiol Exp 2024;8:42. PMID: 38589742. PMCID: PMC11001841. DOI: 10.1186/s41747-024-00442-4.
Abstract
BACKGROUND Developing trustworthy artificial intelligence (AI) models for clinical applications requires access to clinical and imaging data cohorts. Reusing publicly available datasets has the potential to fill this gap. Specifically in the domain of breast cancer, a large archive of publicly accessible medical images along with the corresponding clinical data is available at The Cancer Imaging Archive (TCIA). However, existing datasets cannot be directly used as they are heterogeneous and cannot be effectively filtered for selecting specific image types required to develop AI models. This work focuses on the development of a homogenized dataset in the domain of breast cancer including clinical and imaging data. METHODS Five datasets were acquired from the TCIA and were harmonized. For the clinical data harmonization, a common data model was developed and a repeatable, documented "extract-transform-load" process was defined and executed for their homogenization. Further, Digital Imaging and Communications in Medicine (DICOM) information was extracted from magnetic resonance imaging (MRI) data and made accessible and searchable. RESULTS The resulting harmonized dataset includes information about 2,035 subjects with breast cancer. Further, a platform named RV-Cherry-Picker enables search over both the clinical and diagnostic imaging datasets, providing unified access, facilitating the download of all study images that correspond to specific series characteristics (e.g., dynamic contrast-enhanced series), and reducing the burden of acquiring the appropriate set of images for the respective AI model scenario. CONCLUSIONS RV-Cherry-Picker provides access to the largest publicly available, homogenized imaging/clinical dataset for breast cancer on which to develop AI models. RELEVANCE STATEMENT We present a solution for creating merged public datasets supporting AI model development, using the breast cancer domain and magnetic resonance images as an example. KEY POINTS • The proposed platform allows unified access to the largest, homogenized public imaging dataset for breast cancer. • A methodology for the semantically enriched homogenization of public clinical data is presented. • The platform is able to make a detailed selection of breast MRI data for the development of AI models.
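The series-level DICOM filtering that such a platform relies on can be sketched with pydicom: read headers only, collect series metadata, and filter on the series description. The folder path and the "dyn" keyword below are placeholders, not the platform's actual rules.

```python
# Minimal sketch of DICOM header extraction for series-level filtering
# (e.g., selecting dynamic contrast-enhanced series); paths are placeholders.
from pathlib import Path
import pydicom

rows = []
for f in Path("tcia_breast_case/").rglob("*.dcm"):
    ds = pydicom.dcmread(f, stop_before_pixels=True)   # headers only, fast
    rows.append({
        "series_uid": ds.get("SeriesInstanceUID"),
        "description": str(ds.get("SeriesDescription", "")),
        "modality": ds.get("Modality"),
    })

dce = {r["series_uid"] for r in rows if "dyn" in r["description"].lower()}
print(f"{len(dce)} candidate dynamic contrast-enhanced series")
```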
Affiliation(s)
- Vassilis Kilintzis
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece.
- Varvara Kalokyri
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece
- Haridimos Kondylakis
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece
- Smriti Joshi
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtiques I Informàtica, Universitat de Barcelona, Barcelona, Spain
- Katerina Nikiforaki
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece
- Oliver Díaz
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtiques I Informàtica, Universitat de Barcelona, Barcelona, Spain
- Karim Lekadir
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtiques I Informàtica, Universitat de Barcelona, Barcelona, Spain
- Manolis Tsiknakis
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece
- Kostas Marias
- Institute of Computer Science (ICS), Foundation for Research and Technology - Hellas (FORTH), Heraklion, Crete, Greece
16. Wang W, Pan B, Ai Y, Li G, Fu Y, Liu Y. ParaCM-PNet: A CNN-tokenized MLP combined parallel dual pyramid network for prostate and prostate cancer segmentation in MRI. Comput Biol Med 2024;170:107999. PMID: 38244470. DOI: 10.1016/j.compbiomed.2024.107999.
Abstract
Precise segmentation of the prostate gland and prostate cancer (PCa) enables the fusion of magnetic resonance imaging (MRI) and ultrasound imaging (US) to guide robotic prostate biopsy systems. This precise segmentation, applied to preoperative MRI images, is crucial for accurate image registration and automatic localization of the biopsy target. Nevertheless, describing local prostate lesions in MRI remains a challenging and time-consuming task, even for experienced physicians. Therefore, this research work develops a parallel dual-pyramid network that combines convolutional neural networks (CNN) and a tokenized multi-layer perceptron (MLP) for automatic segmentation of the prostate gland and clinically significant PCa (csPCa) in MRI. The proposed network consists of two stages. The first stage focuses on prostate segmentation, while the second stage uses the partition produced by the first stage as a prior to detect the cancerous regions. Both stages share a similar network architecture, combining CNN and tokenized MLP as the feature extraction backbone to create a pyramid-structured network for feature encoding and decoding. By employing CNN layers of different scales, the network generates scale-aware local semantic features, which are integrated into feature maps and fed into an MLP layer from a global perspective. This facilitates the complementarity between local and global information, capturing richer semantic features. Additionally, the network incorporates an interactive hybrid attention module to enhance the perception of the target area. Experimental results demonstrate the superiority of the proposed network over other state-of-the-art image segmentation methods for segmenting the prostate gland and csPCa tissue in MRI images.
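A tokenized-MLP mixing block of the kind combined with CNN features can be sketched as below, in an MLP-Mixer style. It is a generic illustration rather than the authors' module, and the hidden size is an assumption.

```python
# Minimal sketch of a tokenized-MLP (MLP-Mixer style) block that mixes spatial tokens
# and channels; a generic illustration, not the ParaCM-PNet module.
import torch
import torch.nn as nn

class TokenMLPBlock(nn.Module):
    def __init__(self, channels: int, num_tokens: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(nn.Linear(channels, hidden), nn.GELU(),
                                         nn.Linear(hidden, channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        # Token mixing: an MLP across spatial positions captures global context.
        tokens = tokens + self.token_mlp(self.norm1(tokens).transpose(1, 2)).transpose(1, 2)
        # Channel mixing: an MLP across channels at each position.
        tokens = tokens + self.channel_mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(1, 64, 16, 16)
print(TokenMLPBlock(64, num_tokens=16 * 16)(x).shape)  # torch.Size([1, 64, 16, 16])
```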
Affiliation(s)
- Weirong Wang
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001, China
- Bo Pan
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001, China
- Yue Ai
- Hangzhou Wiseking Medical Robot Co., Ltd, Hangzhou, 310000, China
- Gonghui Li
- Department of Urology, Sir Run Run Shaw Hospital, Medicine School of Zhejiang University, Hangzhou, 310000, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001, China.
- Yanjie Liu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150001, China
17. Netzer N, Eith C, Bethge O, Hielscher T, Schwab C, Stenzinger A, Gnirs R, Schlemmer HP, Maier-Hein KH, Schimmöller L, Bonekamp D. Application of a validated prostate MRI deep learning system to independent same-vendor multi-institutional data: demonstration of transferability. Eur Radiol 2023;33:7463-7476. PMID: 37507610. PMCID: PMC10598076. DOI: 10.1007/s00330-023-09882-9.
Abstract
OBJECTIVES To evaluate a fully automatic deep learning system to detect and segment clinically significant prostate cancer (csPCa) on same-vendor prostate MRI from two different institutions not contributing to training of the system. MATERIALS AND METHODS In this retrospective study, a previously bi-institutionally validated deep learning system (UNETM) was applied to bi-parametric prostate MRI data from one external institution (A), a PI-RADS distribution-matched internal cohort (B), and a csPCa-stratified subset of single-institution external public challenge data (C). csPCa was defined as ISUP Grade Group ≥ 2 determined from combined targeted and extended systematic MRI/transrectal US-fusion biopsy. Performance of UNETM was evaluated by comparing ROC AUC and specificity at typical PI-RADS sensitivity levels. Lesion-level analysis between UNETM segmentations and radiologist-delineated segmentations was performed using the Dice coefficient, free-response operating characteristic (FROC), and weighted alternative FROC (waFROC). The influence of using different diffusion sequences was analyzed in cohort A. RESULTS In 250/250/140 exams in cohorts A/B/C, differences in ROC AUC were insignificant, with 0.80 (95% CI: 0.74-0.85)/0.87 (95% CI: 0.83-0.92)/0.82 (95% CI: 0.75-0.89). At sensitivities of 95% and 90%, UNETM achieved specificity of 30%/50% in A, 44%/71% in B, and 43%/49% in C, respectively. The Dice coefficient of UNETM and radiologist-delineated lesions was 0.36 in A and 0.49 in B. The waFROC AUC was 0.67 (95% CI: 0.60-0.83) in A and 0.70 (95% CI: 0.64-0.78) in B. UNETM performed marginally better on readout-segmented than on single-shot echo-planar imaging. CONCLUSION For same-vendor examinations, deep learning provided comparable discrimination of csPCa and non-csPCa lesions and examinations between local and two independent external data sets, demonstrating the applicability of the system to institutions not participating in model training. CLINICAL RELEVANCE STATEMENT A previously bi-institutionally validated fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets, indicating the potential of deploying AI models without retraining or fine-tuning, and corroborating evidence that AI models extract a substantial amount of transferable domain knowledge about MRI-based prostate cancer assessment. KEY POINTS • A previously bi-institutionally validated fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets. • Lesion detection performance and segmentation congruence were similar on the institutional and an external data set, as measured by the weighted alternative FROC AUC and the Dice coefficient. • Although the system generalized to two external institutions without re-training, achieving expected sensitivity and specificity levels using the deep learning system requires probability thresholds to be adjusted, underlining the importance of institution-specific calibration and quality control.
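The exam-level operating points reported above (specificity read off at fixed sensitivity levels, alongside AUC) can be computed from the ROC curve as sketched below; labels and scores are synthetic placeholders, not study data.

```python
# Minimal sketch of reading specificity at fixed sensitivity levels off the ROC curve;
# y_true / y_score are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 250)                                   # csPCa exam labels
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 250), 0, 1)  # model probabilities

fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC:", round(roc_auc_score(y_true, y_score), 2))
for target_sens in (0.95, 0.90):
    idx = np.argmax(tpr >= target_sens)        # first threshold reaching the sensitivity
    print(f"sensitivity {target_sens:.0%} -> specificity {1 - fpr[idx]:.0%}")
```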
Collapse
Affiliation(s)
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
| | - Carolin Eith
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
| | - Oliver Bethge
- Medical Faculty, Department of Diagnostic and Interventional Radiology, University Dusseldorf, D-40225, Dusseldorf, Germany
| | - Thomas Hielscher
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Constantin Schwab
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
| | - Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- German Cancer Consortium (DKTK), Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
| | - Klaus H Maier-Hein
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Lars Schimmöller
- Medical Faculty, Department of Diagnostic and Interventional Radiology, University Dusseldorf, D-40225, Dusseldorf, Germany
| | - David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany.
- Heidelberg University Medical School, Heidelberg, Germany.
- German Cancer Consortium (DKTK), Heidelberg, Germany.
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany.
| |
Collapse
|
18
|
Stanzione A, Cuocolo R. Generalizability of prostate MRI deep learning: does one size fit all data? Eur Radiol 2023; 33:7461-7462. [PMID: 37526670 DOI: 10.1007/s00330-023-09886-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Revised: 05/23/2023] [Accepted: 06/11/2023] [Indexed: 08/02/2023]
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
| | - Renato Cuocolo
- Department of Medicine, Surgery and Dentistry, University of Salerno, Via Salvador Allende, 43, 84081, Baronissi, SA, Italy.
| |
Collapse
|
19
|
Meglič J, Sunoqrot MRS, Bathen TF, Elschot M. Label-set impact on deep learning-based prostate segmentation on MRI. Insights Imaging 2023; 14:157. [PMID: 37749333 PMCID: PMC10519913 DOI: 10.1186/s13244-023-01502-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 08/12/2023] [Indexed: 09/27/2023] Open
Abstract
BACKGROUND Prostate segmentation is an essential step in computer-aided detection and diagnosis systems for prostate cancer. Deep learning (DL)-based methods provide good performance for prostate gland and zones segmentation, but little is known about the impact of manual segmentation (that is, label) selection on their performance. In this work, we investigated these effects by obtaining two different expert label-sets for the PROSTATEx I challenge training dataset (n = 198) and using them, in addition to an in-house dataset (n = 233), to assess the effect on segmentation performance. The automatic segmentation method we used was nnU-Net. RESULTS The selection of training/testing label-set had a significant (p < 0.001) impact on model performance. Furthermore, it was found that model performance was significantly (p < 0.001) higher when the model was trained and tested with the same label-set. Moreover, the results showed that agreement between automatic segmentations was significantly (p < 0.0001) higher than agreement between manual segmentations and that the models were able to outperform the human label-sets used to train them. CONCLUSIONS We investigated the impact of label-set selection on the performance of a DL-based prostate segmentation model. We found that the use of different sets of manual prostate gland and zone segmentations has a measurable impact on model performance. Nevertheless, DL-based segmentation appeared to have a greater inter-reader agreement than manual segmentation. More thought should be given to the label-set, with a focus on multicenter manual segmentation and agreement on common procedures. CRITICAL RELEVANCE STATEMENT Label-set selection significantly impacts the performance of a deep learning-based prostate segmentation model. Models trained with different label-sets showed higher agreement with each other than the manual segmentations did. KEY POINTS • Label-set selection has a significant impact on the performance of automatic segmentation models. • Deep learning-based models demonstrated true learning rather than simply mimicking the label-set. • Automatic segmentation appears to have a greater inter-reader agreement than manual segmentation.
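The inter-reader and inter-model agreement reported above is expressed as a Dice similarity coefficient; the minimal sketch below (array-based masks assumed, not the authors' nnU-Net pipeline) shows how such pairwise agreement between two label-sets can be computed.

```python
# Minimal sketch (assumed binary masks as NumPy arrays): pairwise Dice agreement between two
# expert label-sets, the quantity compared against inter-model agreement in the abstract above.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

# Hypothetical 3D masks for one case: two expert label-sets.
rng = np.random.default_rng(0)
expert_a = rng.random((16, 64, 64)) > 0.6
expert_b = rng.random((16, 64, 64)) > 0.6
print("inter-reader Dice:", dice(expert_a, expert_b))
```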
Collapse
Affiliation(s)
- Jakob Meglič
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Faculty of Medicine, University of Ljubljana, 1000, Ljubljana, Slovenia.
| | - Mohammed R S Sunoqrot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
| | - Tone Frost Bathen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
| | - Mattijs Elschot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway.
| |
Collapse
|
20
|
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023; 50:4854-4870. [PMID: 36856092 PMCID: PMC11098147 DOI: 10.1002/mp.16320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 01/11/2023] [Accepted: 01/29/2023] [Indexed: 03/02/2023] Open
Abstract
BACKGROUND Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images applied to MR-guided radiation therapy. Validate generalizability of constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet) as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Dataset 1 and 2), and accuracy with respect to two raters (on Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS more accurately segmented tumors than other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset3 (DSC of 0.45, p = 0.04). FPSnet-SL was as accurate as MRRN-DS in Dataset2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049] respectively) and Dataset3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004] respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
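Lesion-level detection accuracy of the kind summarized above can be illustrated with the following sketch, which uses simple overlap-based matching of connected components. This is an assumed matching criterion for illustration only, not the authors' exact criterion or their balanced F1 definition.

```python
# Minimal sketch (assumed overlap-based lesion matching on hypothetical binary masks):
# predicted connected components count as true positives when they overlap a ground-truth
# lesion, giving the precision/recall behind an F1-style lesion detection measure.
import numpy as np
from scipy import ndimage

def detection_f1(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    gt_mask, pred_mask = gt_mask.astype(bool), pred_mask.astype(bool)
    gt_lab, n_gt = ndimage.label(gt_mask)      # connected components = individual lesions
    pr_lab, n_pr = ndimage.label(pred_mask)
    hit_gt = {g for g in range(1, n_gt + 1) if (pred_mask & (gt_lab == g)).any()}
    hit_pr = {p for p in range(1, n_pr + 1) if (gt_mask & (pr_lab == p)).any()}
    tp, fn, fp = len(hit_gt), n_gt - len(hit_gt), n_pr - len(hit_pr)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical ground-truth and predicted lesion masks.
rng = np.random.default_rng(5)
gt = rng.random((8, 64, 64)) > 0.95
pred = rng.random((8, 64, 64)) > 0.95
print("detection F1:", detection_f1(gt, pred))
```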
Collapse
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| |
Collapse
|
21
|
Karagoz A, Alis D, Seker ME, Zeybel G, Yergin M, Oksuz I, Karaarslan E. Anatomically guided self-adapting deep neural network for clinically significant prostate cancer detection on bi-parametric MRI: a multi-center study. Insights Imaging 2023; 14:110. [PMID: 37337101 DOI: 10.1186/s13244-023-01439-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 04/17/2023] [Indexed: 06/21/2023] Open
Abstract
OBJECTIVE To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics; to investigate the advantages of transfer learning. METHODS We used two samples: (i) Publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) In-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved an AUROC of 0.888 and 0.889 on the hidden validation and testing data. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. CLINICAL RELEVANCE STATEMENT A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
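The patient-level AUROC reported above requires reducing a voxel-wise detection map to a single score per scan; the sketch below assumes a simple maximum-probability reduction (a common convention, not necessarily the authors' exact scoring) and computes the AUROC with scikit-learn on hypothetical data.

```python
# Minimal sketch (assumed scoring rule, hypothetical data; not the authors' nnU-Net pipeline):
# derive a patient-level csPCa probability from a voxel-wise detection map by taking its
# maximum, then compute the patient-level AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def patient_score(detection_map: np.ndarray) -> float:
    """Reduce a voxel-wise lesion probability map to one score per patient."""
    return float(detection_map.max())

# Hypothetical detection maps and patient-level labels (1 = csPCa present).
rng = np.random.default_rng(42)
maps = [rng.random((8, 32, 32)) * s for s in (0.4, 0.9, 0.5, 1.0)]
labels = [0, 1, 0, 1]
scores = [patient_score(m) for m in maps]
print("patient-level AUROC:", roc_auc_score(labels, scores))
```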
Collapse
Affiliation(s)
- Ahmet Karagoz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
| | - Deniz Alis
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey.
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey.
| | - Mustafa Ege Seker
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| | - Gokberk Zeybel
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| | - Mert Yergin
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
| | - Ilkay Oksuz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
| | - Ercan Karaarslan
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| |
Collapse
|
22
|
Zaridis DI, Mylona E, Tachos N, Pezoulas VC, Grigoriadis G, Tsiknakis N, Marias K, Tsiknakis M, Fotiadis DI. Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones. Sci Rep 2023; 13:714. [PMID: 36639671 PMCID: PMC9837765 DOI: 10.1038/s41598-023-27671-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 01/05/2023] [Indexed: 01/14/2023] Open
Abstract
Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. Particularly, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNN) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice Score ranging from 3% to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
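As a rough illustration of contrast-limited adaptive histogram equalization applied to a T2w slice before CNN segmentation, the sketch below uses plain scikit-image CLAHE. The region-adaptive RACLAHE method introduced above is the authors' own and is not reproduced here; this is a generic stand-in.

```python
# Minimal sketch: standard CLAHE applied slice-wise to a T2w prostate MR image before feeding
# a segmentation CNN. Plain scikit-image CLAHE is used as a stand-in; this is NOT the
# region-adaptive RACLAHE method described in the abstract above.
import numpy as np
from skimage import exposure

def enhance_slice(t2w_slice: np.ndarray, clip_limit: float = 0.02) -> np.ndarray:
    """Rescale a 2D slice to [0, 1] and apply contrast-limited adaptive histogram equalization."""
    lo, hi = np.percentile(t2w_slice, (1, 99))
    norm = np.clip((t2w_slice - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return exposure.equalize_adapthist(norm, clip_limit=clip_limit)

# Hypothetical T2w slice.
slice_2d = np.random.default_rng(1).random((256, 256)).astype(np.float32)
enhanced = enhance_slice(slice_2d)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```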
Collapse
Affiliation(s)
- Dimitrios I Zaridis
- Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
| | - Eugenia Mylona
- Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
| | - Nikolaos Tachos
- Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
| | - Vasileios C Pezoulas
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
| | - Grigorios Grigoriadis
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
| | - Nikos Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece
| | - Kostas Marias
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece.,Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Greece
| | - Manolis Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece.,Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Greece
| | - Dimitrios I Fotiadis
- Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece. .,Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece.
| |
Collapse
|
23
|
To MNN, Kwak JT. Biparametric MR signal characteristics can predict histopathological measures of prostate cancer. Eur Radiol 2022; 32:8027-8038. [PMID: 35505115 DOI: 10.1007/s00330-022-08808-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 03/17/2022] [Accepted: 04/11/2022] [Indexed: 01/03/2023]
Abstract
OBJECTIVES The aim of this study was to establish a new data-driven metric from MRI signal intensity that can quantify histopathological characteristics of prostate cancer. METHODS This retrospective study was conducted on 488 patients who underwent biparametric MRI (bp-MRI), including T2-weighted imaging (T2W) and apparent diffusion coefficient (ADC) of diffusion-weighted imaging, and had biopsy-proven prostate cancer between August 2011 and July 2015. Forty-two patients who underwent radical prostatectomy constitute the labeled dataset, and the remaining 446 patients constitute the unlabeled dataset. A deep learning model was built to predict the density of epithelium, epithelial nuclei, stroma, and lumen from bp-MRI, called MR-driven tissue density. On both the labeled validation set and the whole unlabeled dataset, the quality of MR-driven tissue density and its relation to bp-MRI signal intensity were examined with respect to different histopathologic and radiologic conditions using different statistical analyses. RESULTS MR-driven tissue density and bp-MRI of 446 patients were evaluated. MR-driven tissue density was significantly related to bp-MRI (p < 0.05). The relationship was generally stronger in cancer regions than in benign regions. Regarding cancer grades, significant differences were found in the intensity of bp-MRI and MR-driven tissue density of epithelium, epithelial nuclei, and stroma (p < 0.05). Comparing MR true-negative to MR false-positive regions, MR-driven lumen density was significantly different, as was the intensity of bp-MRI (p < 0.001). CONCLUSIONS MR-driven tissue density could serve as a reliable histopathological measure of the prostate on bp-MRI, leading to an improved understanding of prostate cancer and cancer progression. KEY POINTS • Semi-supervised deep learning enables non-invasive and quantitative histopathology in the prostate from biparametric MRI. • Tissue density derived from biparametric MRI demonstrates similar characteristics to the direct estimation of tissue density from histopathology images. • The analysis of MR-driven tissue density reveals significantly different tissue compositions among different cancer grades as well as between MR-positive and MR-negative benign regions.
Collapse
Affiliation(s)
- Minh Nguyen Nhat To
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, Canada
| | - Jin Tae Kwak
- School of Electrical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea.
| |
Collapse
|
24
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Dataset of prostate MRI annotated for anatomical zones and cancer. Data Brief 2022; 45:108739. [DOI: 10.1016/j.dib.2022.108739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/03/2022] [Accepted: 11/04/2022] [Indexed: 11/11/2022] Open
|
25
|
Stanzione A, Verde F, Cuocolo R, Romeo V, Mainenti PP, Brunetti A, Maurea S. Placenta Accreta Spectrum Disorders and Radiomics: Systematic review and quality appraisal. Eur J Radiol 2022; 155:110497. [PMID: 36030661 DOI: 10.1016/j.ejrad.2022.110497] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Revised: 08/13/2022] [Accepted: 08/18/2022] [Indexed: 12/24/2022]
Abstract
PURPOSE Ultrasound and magnetic resonance imaging are the imaging modalities of choice for the assessment of placenta accreta spectrum (PAS) disorders. Radiomics could further increase the value of medical images and help overcome the limitations linked to their visual assessment. The aim of this systematic review was to identify and appraise the methodological quality of radiomics studies focused on PAS disorder applications. METHOD Three online databases (PubMed, Scopus and Web of Science) were searched to identify original research articles on human subjects published in English. For the qualitative synthesis of results, data regarding study design (e.g., retrospective or prospective), purpose, patient population (e.g., sample size), imaging modalities and radiomics pipelines (e.g., segmentation and feature extraction strategy) were collected. The appraisal of methodological quality was performed using the Radiomics Quality Score (RQS). RESULTS 10 articles were finally included and analyzed. All were retrospective and MRI-based. The majority included more than 100 patients (6/10). Four were prognostic studies (focused on either the prediction of bleeding volume or of the required management), while six were diagnostic studies (PAS vs. non-PAS classification). The median RQS was 8, with maximum and minimum scores of 17/36 and −6/36, respectively. Major methodological concerns were the lack of feature stability to multiple segmentation testing and poor data openness. CONCLUSIONS Radiomics studies focused on PAS disorders showed a heterogeneous methodological quality, overall lower than desirable. Furthermore, many relevant research questions remain unexplored. More robust investigations are needed to foster advancements in the field and possibly clinical translation.
Collapse
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
| | - Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy.
| | - Renato Cuocolo
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, Italy; Augmented Reality for Health Monitoring Laboratory (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples "Federico II", Naples, Italy
| | - Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
| | - Pier Paolo Mainenti
- Institute of Biostructures and Bioimaging of the National Research Council, Naples, Italy
| | - Arturo Brunetti
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
| | - Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
| |
Collapse
|
26
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4; 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7; 0.71/26.3/2.2 for the peripheral zone. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
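The DSC, Hausdorff distance (HD) and average surface distance (ASD) values reported above can be computed from binary masks as in the following distance-transform-based sketch; the inputs are assumed array masks and this is not the Prostate158 evaluation code.

```python
# Minimal sketch (assumed binary-mask inputs, hypothetical data): Dice similarity coefficient
# (DSC), Hausdorff distance (HD) and average surface distance (ASD) computed from distance
# transforms of the mask boundaries.
import numpy as np
from scipy import ndimage

def surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of mask a to the surface of mask b, and vice versa."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return dist_to_b[surf_a], dist_to_a[surf_b]

def dsc_hd_asd(a, b, spacing=(1.0, 1.0, 1.0)):
    dsc = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    d_ab, d_ba = surface_distances(a, b, spacing)
    hd = max(d_ab.max(), d_ba.max())             # symmetric Hausdorff distance
    asd = np.concatenate([d_ab, d_ba]).mean()    # average symmetric surface distance
    return dsc, hd, asd

# Hypothetical ground-truth and predicted masks with 3 mm slice thickness.
rng = np.random.default_rng(3)
gt = rng.random((12, 96, 96)) > 0.7
pred = rng.random((12, 96, 96)) > 0.7
print(dsc_hd_asd(gt, pred, spacing=(3.0, 0.5, 0.5)))
```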
Collapse
Affiliation(s)
- Lisa C Adams
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany.
| | - Marcus R Makowski
- Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
| | - Günther Engel
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
| | - Maximilian Rattunde
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Felix Busch
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Patrick Asbach
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Stefan M Niehues
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
| | | | - Geert Litjens
- Radboud University Medical Center, Nijmegen, GA, the Netherlands
| | - Keno K Bressem
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
| |
Collapse
|
27
|
Meningioma Radiomics: At the Nexus of Imaging, Pathology and Biomolecular Characterization. Cancers (Basel) 2022; 14:cancers14112605. [PMID: 35681585 PMCID: PMC9179263 DOI: 10.3390/cancers14112605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 05/20/2022] [Accepted: 05/23/2022] [Indexed: 12/10/2022] Open
Abstract
Simple Summary Meningiomas are typically benign, common extra-axial tumors of the central nervous system. Routine clinical assessment by radiologists presents some limitations regarding long-term patient outcome prediction and risk stratification. Given the exponential growth of interest in radiomics and artificial intelligence in medical imaging, numerous studies have evaluated the potential of these tools in the setting of meningioma imaging. These were aimed at the development of reliable and reproducible models based on quantitative data. Although several limitations have yet to be overcome for their routine use in clinical practice, their innovative potential is evident. In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging. Abstract Meningiomas are the most common extra-axial tumors of the central nervous system (CNS). Even though recurrence is uncommon after surgery and most meningiomas are benign, an aggressive behavior may still be exhibited in some cases. Although the diagnosis can be made by radiologists, typically with magnetic resonance imaging, qualitative analysis has some limitations in regard to outcome prediction and risk stratification. The acquisition of this information could help the referring clinician in the decision-making process and selection of the appropriate treatment. Following the increased attention and potential of radiomics and artificial intelligence in the healthcare domain, including oncological imaging, researchers have investigated their use over the years to overcome the current limitations of imaging. The aim of these new tools is the replacement of subjective and, therefore, potentially variable medical image analysis by more objective quantitative data, using computational algorithms. Although radiomics has not yet fully entered clinical practice, its potential for the detection, diagnostic, and prognostic characterization of tumors is evident. In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging.
Collapse
|
28
|
Sushentsev N, Moreira Da Silva N, Yeung M, Barrett T, Sala E, Roberts M, Rundo L. Comparative performance of fully-automated and semi-automated artificial intelligence methods for the detection of clinically significant prostate cancer on MRI: a systematic review. Insights Imaging 2022; 13:59. [PMID: 35347462 PMCID: PMC8960511 DOI: 10.1186/s13244-022-01199-3] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 02/24/2022] [Indexed: 12/12/2022] Open
Abstract
OBJECTIVES We systematically reviewed the current literature evaluating the ability of fully-automated deep learning (DL) and semi-automated traditional machine learning (TML) MRI-based artificial intelligence (AI) methods to differentiate clinically significant prostate cancer (csPCa) from indolent PCa (iPCa) and benign conditions. METHODS We performed a computerised bibliographic search of studies indexed in MEDLINE/PubMed, arXiv, medRxiv, and bioRxiv between 1 January 2016 and 31 July 2021. Two reviewers performed the title/abstract and full-text screening. The remaining papers were screened by four reviewers using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for DL studies and Radiomics Quality Score (RQS) for TML studies. Papers that fulfilled the pre-defined screening requirements underwent full CLAIM/RQS evaluation alongside the risk of bias assessment using QUADAS-2, both conducted by the same four reviewers. Standard measures of discrimination were extracted for the developed predictive models. RESULTS 17/28 papers (five DL and twelve TML) passed the quality screening and were subject to a full CLAIM/RQS/QUADAS-2 assessment, which revealed a substantial study heterogeneity that precluded us from performing quantitative analysis as part of this review. The mean RQS of TML papers was 11/36, and a total of five papers had a high risk of bias. AUCs of DL and TML papers with low risk of bias ranged between 0.80-0.89 and 0.75-0.88, respectively. CONCLUSION We observed comparable performance of the two classes of AI methods and identified a number of common methodological limitations and biases that future studies will need to address to ensure the generalisability of the developed models.
Collapse
Affiliation(s)
- Nikita Sushentsev
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK.
| | | | - Michael Yeung
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Tristan Barrett
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Evis Sala
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
| | - Michael Roberts
- Department of Applied Mathematics and Theoretical Physics, The Cambridge Mathematics of Information in Healthcare Hub, University of Cambridge, Cambridge, UK
- Oncology R&D, AstraZeneca, Cambridge, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA, Italy
| |
Collapse
|
29
|
Synthetic correlated diffusion imaging hyperintensity delineates clinically significant prostate cancer. Sci Rep 2022; 12:3376. [PMID: 35232991 PMCID: PMC8888633 DOI: 10.1038/s41598-022-06872-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Accepted: 02/08/2022] [Indexed: 11/08/2022] Open
Abstract
Prostate cancer (PCa) is the second most common cancer in men worldwide and the most frequently diagnosed cancer among men in more developed countries. The prognosis of PCa is excellent if detected at an early stage, making early screening crucial for detection and treatment. In recent years, a new form of diffusion magnetic resonance imaging called correlated diffusion imaging (CDI) was introduced, and preliminary results show promise as a screening tool for PCa. In the largest study of its kind, we investigate the relationship between PCa presence and a new variant of CDI we term synthetic correlated diffusion imaging (CDI^s), as well as its performance for PCa delineation compared to current standard MRI techniques [T2-weighted (T2w) imaging, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging] across a cohort of 200 patient cases. Statistical analyses reveal that hyperintensity in CDI^s is a strong indicator of PCa presence and achieves strong delineation of clinically significant cancerous tissue compared to T2w, DWI, and DCE. These results suggest that CDI^s hyperintensity may be a powerful biomarker for the presence of PCa, and may have a clinical impact as a diagnostic aid for improving PCa screening.
Collapse
|
30
|
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345 PMCID: PMC8920492 DOI: 10.1117/1.jmi.9.2.024001] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/23/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other compared networks (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are consistent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a supporting tool for PCa diagnosis.
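As a generic illustration of the channel-wise attention mentioned above, the sketch below implements a standard squeeze-and-excitation block in PyTorch. It is an assumption for illustration only, not the UFNet architecture itself.

```python
# Minimal sketch (a generic squeeze-and-excitation block on hypothetical 3D feature maps;
# not the UFNet implementation described in the abstract above).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # re-weight feature maps channel-wise

feat = torch.randn(2, 32, 8, 64, 64)                  # hypothetical batch of 3D feature maps
print(ChannelAttention(32)(feat).shape)
```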
Collapse
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| | - Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
| | - Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
| | - Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| | - Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| |
Collapse
|
31
|
Stanzione A, Galatola R, Cuocolo R, Romeo V, Verde F, Mainenti PP, Brunetti A, Maurea S. Radiomics in Cross-Sectional Adrenal Imaging: A Systematic Review and Quality Assessment Study. Diagnostics (Basel) 2022; 12:578. [PMID: 35328133 PMCID: PMC8947112 DOI: 10.3390/diagnostics12030578] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 02/19/2022] [Accepted: 02/21/2022] [Indexed: 12/22/2022] Open
Abstract
In this study, we aimed to systematically review the current literature on radiomics applied to cross-sectional adrenal imaging and assess its methodological quality. Scopus, PubMed and Web of Science were searched to identify original research articles investigating radiomics applications on cross-sectional adrenal imaging (search end date February 2021). For qualitative synthesis, details regarding study design (e.g., retrospective or prospective), aim, sample size and imaging modality were recorded as well as those regarding the radiomics pipeline (e.g., segmentation and feature extraction strategy). The methodological quality of each study was evaluated using the radiomics quality score (RQS). After duplicate removal and selection criteria application, 25 full-text articles were included and evaluated. All were retrospective studies, mostly based on CT images (17/25, 68%), with manual (19/25, 76%) and two-dimensional segmentation (13/25, 52%) being preferred. Machine learning was paired to radiomics in about half of the studies (12/25, 48%). The median total and percentage RQS scores were 2 (interquartile range, IQR = −5 to 8) and 6% (IQR = 0% to 22%), respectively. The highest and lowest scores registered were 12/36 (33%) and −5/36 (0%). The most critical issues were the absence of proper feature selection, the lack of appropriate model validation and poor data openness. The methodological quality of radiomics studies on adrenal cross-sectional imaging is heterogeneous and lower than desirable. Efforts toward building higher quality evidence are essential to facilitate the future translation into clinical practice.
Collapse
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| | - Roberta Galatola
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| | - Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples “Federico II”, 80131 Naples, Italy
- Interdepartmental Research Center on Management and Innovation in Healthcare-CIRMIS, University of Naples “Federico II”, 80100 Naples, Italy
- Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples “Federico II”, 80100 Naples, Italy
| | - Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| | - Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| | - Pier Paolo Mainenti
- Institute of Biostructures and Bioimaging of the National Research Council, 80131 Naples, Italy;
| | - Arturo Brunetti
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| | - Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.S.); (R.G.); (V.R.); (F.V.); (A.B.); (S.M.)
| |
Collapse
|
32
|
Radiomics in Cardiovascular Disease Imaging: from Pixels to the Heart of the Problem. Curr Cardiovasc Imaging Rep 2022. [DOI: 10.1007/s12410-022-09563-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Purpose of Review
This review of the literature aims to present potential applications of radiomics in cardiovascular radiology and, in particular, in cardiac imaging.
Recent Findings
Radiomics and machine learning represent a technological innovation which may be used to extract and analyze quantitative features from medical images. They aid in detecting hidden pattern in medical data, possibly leading to new insights in pathophysiology of different medical conditions. In the recent literature, radiomics and machine learning have been investigated for numerous potential applications in cardiovascular imaging. They have been proposed to improve image acquisition and reconstruction, for anatomical structure automated segmentation or automated characterization of cardiologic diseases.
Summary
The number of applications for radiomics and machine learning is continuing to rise, even though methodological and implementation issues still limit their use in daily practice. In the long term, they may have a positive impact in patient management.
Collapse
|
33
|
Mehta P, Antonelli M, Singh S, Grondecka N, Johnston EW, Ahmed HU, Emberton M, Punwani S, Ourselin S. AutoProstate: Towards Automated Reporting of Prostate MRI for Prostate Cancer Assessment Using Deep Learning. Cancers (Basel) 2021; 13:6138. [PMID: 34885246 PMCID: PMC8656605 DOI: 10.3390/cancers13236138] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 11/30/2021] [Accepted: 12/03/2021] [Indexed: 11/21/2022] Open
Abstract
Multiparametric magnetic resonance imaging (mpMRI) of the prostate is used by radiologists to identify, score, and stage abnormalities that may correspond to clinically significant prostate cancer (CSPCa). Automatic assessment of prostate mpMRI using artificial intelligence algorithms may facilitate a reduction in missed cancers and unnecessary biopsies, an increase in inter-observer agreement between radiologists, and an improvement in reporting quality. In this work, we introduce AutoProstate, a deep learning-powered framework for automatic MRI-based prostate cancer assessment. AutoProstate comprises three modules: Zone-Segmenter, CSPCa-Segmenter, and Report-Generator. Zone-Segmenter segments the prostatic zones on T2-weighted imaging, CSPCa-Segmenter detects and segments CSPCa lesions using biparametric MRI, and Report-Generator generates an automatic web-based report containing four sections: Patient Details, Prostate Size and PSA Density, Clinically Significant Lesion Candidates, and Findings Summary. In our experiment, AutoProstate was trained using the publicly available PROSTATEx dataset, and externally validated using the PICTURE dataset. Moreover, the performance of AutoProstate was compared to the performance of an experienced radiologist who prospectively read PICTURE dataset cases. In comparison to the radiologist, AutoProstate showed statistically significant improvements in prostate volume and prostate-specific antigen density estimation. Furthermore, AutoProstate matched the CSPCa lesion detection sensitivity of the radiologist, which is paramount, but produced more false positive detections.
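Two of the automatically reported quantities mentioned above, prostate volume and PSA density, follow directly from a whole-gland segmentation; the sketch below illustrates that relationship on hypothetical inputs. It is an assumption about how the quantities relate, not AutoProstate's code.

```python
# Minimal sketch (assumed relationship, hypothetical mask and PSA value; not AutoProstate's code):
# prostate volume from a whole-gland segmentation mask and PSA density as PSA divided by volume.
import numpy as np

def prostate_volume_ml(mask: np.ndarray, spacing_mm=(3.0, 0.5, 0.5)) -> float:
    """Volume in millilitres: voxel count times voxel volume (mm^3), converted to mL."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    return psa_ng_ml / volume_ml

# Hypothetical whole-gland mask and PSA value.
mask = np.zeros((20, 128, 128), dtype=bool)
mask[5:15, 40:90, 40:90] = True
vol = prostate_volume_ml(mask)
print(f"volume = {vol:.1f} mL, PSA density = {psa_density(7.5, vol):.3f} ng/mL/cc")
```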
Collapse
Affiliation(s)
- Pritesh Mehta
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- School of Biomedical Engineering Imaging Sciences, King’s College London, London SE1 7EH, UK; (M.A.); (S.O.)
| | - Michela Antonelli
- School of Biomedical Engineering Imaging Sciences, King’s College London, London SE1 7EH, UK; (M.A.); (S.O.)
| | - Saurabh Singh
- Centre for Medical Imaging, University College London, London WC1E 6BT, UK; (S.S.); (S.P.)
| | - Natalia Grondecka
- Department of Medical Radiology, Medical University of Lublin, 20-059 Lublin, Poland;
| | | | - Hashim U. Ahmed
- Imperial Prostate, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London SW7 2AZ, UK;
| | - Mark Emberton
- Division of Surgery and Interventional Science, Faculty of Medical Sciences, University College London, London WC1E 6BT, UK;
| | - Shonit Punwani
- Centre for Medical Imaging, University College London, London WC1E 6BT, UK; (S.S.); (S.P.)
| | - Sébastien Ourselin
- School of Biomedical Engineering Imaging Sciences, King’s College London, London SE1 7EH, UK; (M.A.); (S.O.)
| |
Collapse
|
34
|
Prediction of Prostate Cancer Disease Aggressiveness Using Bi-Parametric Mri Radiomics. Cancers (Basel) 2021; 13:cancers13236065. [PMID: 34885175 PMCID: PMC8657292 DOI: 10.3390/cancers13236065] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 11/24/2021] [Accepted: 11/26/2021] [Indexed: 11/18/2022] Open
Abstract
Simple Summary The use of radiomics has been studied to predict Gleason Score from bi-parametric prostate MRI examinations. However, different combinations of type of input data (whole prostate gland/lesion features), sampling strategy, feature selection method and machine learning algorithm can be used. The impact of such choices was investigated and it was found that features extracted from the whole prostate gland were more stable to segmentation differences and produced better models (higher performance and less overfitting). This result suggests that the areas surrounding the tumour lesions offer relevant information regarding the Gleason Score that is ultimately attributed to that lesion. Abstract Prostate cancer is one of the most prevalent cancers in the male population. Its diagnosis and classification rely on unspecific measures such as PSA levels and DRE, followed by biopsy, where an aggressiveness level is assigned in the form of Gleason Score. Efforts have been made in the past to use radiomics coupled with machine learning to predict prostate cancer aggressiveness from clinical images, showing promising results. Thus, the main goal of this work was to develop supervised machine learning models exploiting radiomic features extracted from bpMRI examinations, to predict biological aggressiveness; 288 classifiers were developed, corresponding to different combinations of pipeline aspects, namely, type of input data, sampling strategy, feature selection method and machine learning algorithm. On a cohort of 281 lesions from 183 patients, it was found that (1) radiomic features extracted from the lesion volume of interest were less stable to segmentation than the equivalent extraction from the whole gland volume of interest; and (2) radiomic features extracted from the whole gland volume of interest produced higher performance and less overfitted classifiers than radiomic features extracted from the lesions volumes of interest. This result suggests that the areas surrounding the tumour lesions offer relevant information regarding the Gleason Score that is ultimately attributed to that lesion.
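One of the many pipeline combinations described above (feature scaling, feature selection, classifier) can be sketched with scikit-learn as follows. The feature matrix and label are hypothetical, and this is a single illustrative configuration rather than any of the 288 configurations evaluated by the authors.

```python
# Minimal sketch (hypothetical radiomic feature matrix and labels; one illustrative pipeline
# combination, not the authors' 288 configurations): feature selection plus a classifier to
# predict a binarised Gleason-based aggressiveness label.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(183, 100))          # hypothetical: 183 patients x 100 radiomic features
y = rng.integers(0, 2, size=183)         # hypothetical aggressiveness label (e.g., GS >= 7)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print("CV AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```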
Collapse
|
35
|
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Collapse
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
| | - Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan;
| | - Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
| | - Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| |
Collapse
|
36
|
Bleker J, Yakar D, van Noort B, Rouw D, de Jong IJ, Dierckx RAJO, Kwee TC, Huisman H. Single-center versus multi-center biparametric MRI radiomics approach for clinically significant peripheral zone prostate cancer. Insights Imaging 2021; 12:150. [PMID: 34674058 PMCID: PMC8531183 DOI: 10.1186/s13244-021-01099-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 09/11/2021] [Indexed: 01/06/2023] Open
Abstract
Objectives: To investigate a previously developed radiomics-based biparametric magnetic resonance imaging (bpMRI) approach for discriminating clinically significant peripheral zone prostate cancer (PZ csPCa) using multi-center, multi-vendor (McMv) and single-center, single-vendor (ScSv) datasets.
Methods: This study’s starting point was a previously developed ScSv algorithm for PZ csPCa whose performance had been demonstrated on a single-center dataset. A McMv dataset was collected, and 262 PZ PCa lesions (9 centers, 2 vendors) were selected to develop a multi-center algorithm in an identical manner. The single-center algorithm was then applied to the multi-center dataset (single–multi-validation), and the McMv algorithm was applied to both the multi-center dataset (multi–multi-validation) and the previously used single-center dataset (multi–single-validation). The areas under the curve (AUCs) of the validations were compared using bootstrapping.
Results: The previously reported single–single-validation achieved an AUC of 0.82 (95% CI 0.71–0.92); applying the same single-center model to multi-center data (single–multi-validation) yielded an AUC of 0.59 (95% CI 0.51–0.68), a significant performance reduction of 27.2%. The new multi-center model achieved a multi–multi-validation AUC of 0.75 (95% CI 0.64–0.84); compared to its multi–single-validation AUC of 0.66 (95% CI 0.56–0.75), this performance did not decrease significantly (p value: 0.114). Bootstrapped comparison showed similar single-center performances and a significantly different multi-center performance (p values: 0.03, 0.012).
Conclusions: A radiomics-based bpMRI model trained on single-center data does not generalize to multi-center data. Models trained on multi-center data do generalize, achieve equal single-center performance, and perform better on multi-center data.
Supplementary Information: The online version contains supplementary material available at 10.1186/s13244-021-01099-y.
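The bootstrapped AUC comparison described above can be illustrated with a short sketch. In the Python snippet below, the labels and model scores are hypothetical inputs and the resampling scheme is only one plausible variant, not the authors' actual code:

```python
# Rough sketch of a bootstrapped AUC comparison between two models scored on
# the same test cases; y_true, scores_a and scores_b are hypothetical inputs.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y_true, scores_a, scores_b, n_boot=2000, seed=0):
    """Return the observed AUC difference and a two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    scores_a = np.asarray(scores_a)
    scores_b = np.asarray(scores_b)
    observed = roc_auc_score(y_true, scores_a) - roc_auc_score(y_true, scores_b)
    diffs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:   # AUC is undefined with one class
            continue
        diffs.append(roc_auc_score(y_true[idx], scores_a[idx])
                     - roc_auc_score(y_true[idx], scores_b[idx]))
    diffs = np.asarray(diffs)
    # two-sided p-value: how often the bootstrap difference crosses zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, min(p, 1.0)
```

In this sketch, a p-value below 0.05 would indicate that the two models' AUCs differ significantly on the given test set.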
Collapse
Affiliation(s)
- Jeroen Bleker
- Departments of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands; Meditech Building, Room n305, L.J. Zielstraweg 1, 9713 GX, Groningen, The Netherlands.
| | - Derya Yakar
- Departments of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands
| | - Bram van Noort
- Departments of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands
| | - Dennis Rouw
- Department of Radiology, Martini Hospital Groningen, Van Swietenplein 1, 9728 NT, Groningen, The Netherlands
| | - Igle Jan de Jong
- Department of Urology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands
| | - Rudi A J O Dierckx
- Departments of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands
| | - Thomas C Kwee
- Departments of Radiology, Nuclear Medicine and Molecular Imaging, Medical Imaging Center, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB, Groningen, The Netherlands
| | - Henkjan Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| |
Collapse
|
37
|
A Combined Radiomics and Machine Learning Approach to Distinguish Clinically Significant Prostate Lesions on a Publicly Available MRI Dataset. J Imaging 2021; 7:jimaging7100215. [PMID: 34677301 PMCID: PMC8540196 DOI: 10.3390/jimaging7100215] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 10/01/2021] [Accepted: 10/13/2021] [Indexed: 12/14/2022] Open
Abstract
Although prostate cancer is one of the most common causes of mortality and morbidity in ageing males, early diagnosis improves prognosis and can modify the therapy of choice. The aim of this study was to evaluate a combined radiomics and machine learning approach on a publicly available dataset in order to distinguish clinically significant from clinically non-significant prostate lesions. A total of 299 prostate lesions were included in the analysis. A univariate statistical analysis was performed to assess the ability of the 60 extracted radiomic features to distinguish prostate lesions. Ten-fold cross-validation was then used to train and test several models and to calculate the evaluation metrics; finally, a hold-out test was performed and a wrapper feature selection was applied. The employed algorithms were Naïve Bayes, k-nearest neighbour and several tree-based methods. The tree-based algorithms achieved the highest evaluation metrics, with accuracies over 80% and areas under the receiver-operating-characteristic curve below 0.80. Machine learning algorithms combined with radiomics based on clinical, routine, multiparametric magnetic resonance imaging were shown to be a useful tool for prostate cancer stratification.
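To make that workflow concrete, the following Python sketch mirrors the kind of pipeline the abstract describes: a 10-fold cross-validated, tree-based classifier on a radiomic feature table, followed by a hold-out split with wrapper (sequential forward) feature selection. The feature matrix and labels are random placeholders, and the specific estimators and parameters are assumptions rather than the study's actual choices:

```python
# Illustrative sketch (not the study's pipeline): 10-fold CV of a tree-based
# classifier on a radiomic feature table, plus a hold-out split with a
# wrapper-style (sequential forward) feature selection step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(299, 60))        # placeholder: 60 radiomic features per lesion
y = rng.integers(0, 2, size=299)      # placeholder labels: csPCa vs. non-csPCa

# 10-fold cross-validated AUC for a tree-based model
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc_scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUC: {auc_scores.mean():.2f} +/- {auc_scores.std():.2f}")

# Hold-out split with wrapper (sequential forward) feature selection
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
selector = SequentialFeatureSelector(
    clf, n_features_to_select=10, direction="forward", cv=5)
selector.fit(X_train, y_train)
clf.fit(selector.transform(X_train), y_train)
print("Hold-out accuracy:", clf.score(selector.transform(X_test), y_test))
```

On real radiomic features the same scaffold applies; only the feature table, labels and the chosen estimators would change.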
Collapse
|
38
|
Castaldo A, De Lucia DR, Pontillo G, Gatti M, Cocozza S, Ugga L, Cuocolo R. State of the Art in Artificial Intelligence and Radiomics in Hepatocellular Carcinoma. Diagnostics (Basel) 2021; 11:1194. [PMID: 34209197 PMCID: PMC8307071 DOI: 10.3390/diagnostics11071194] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/24/2021] [Accepted: 06/24/2021] [Indexed: 12/12/2022] Open
Abstract
The most common liver malignancy is hepatocellular carcinoma (HCC), which is also associated with high mortality. HCC often develops in the setting of chronic liver disease, and early diagnosis as well as accurate screening of high-risk patients is crucial for appropriate and effective management of these patients. While the imaging characteristics of HCC are well defined in the diagnostic phase, challenging cases still occur, and current prognostic and predictive models are limited in their accuracy. Radiomics and machine learning (ML) offer new tools to address these issues and may lead to scientific breakthroughs with the potential to impact clinical practice and improve patient outcomes. In this review, we present an overview of these technologies in the setting of HCC imaging across different modalities and a range of applications, including lesion segmentation, diagnosis, prognostic modeling and prediction of treatment response. Finally, the limitations currently preventing clinical application of radiomics and ML are discussed, together with the developments needed to move the field forward and beyond a purely academic endeavor.
Collapse
Affiliation(s)
- Anna Castaldo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.C.); (D.R.D.L.); (G.P.); (S.C.); (L.U.)
| | - Davide Raffaele De Lucia
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.C.); (D.R.D.L.); (G.P.); (S.C.); (L.U.)
| | - Giuseppe Pontillo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.C.); (D.R.D.L.); (G.P.); (S.C.); (L.U.)
| | - Marco Gatti
- Radiology Unit, Department of Surgical Sciences, University of Turin, 10124 Turin, Italy;
| | - Sirio Cocozza
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.C.); (D.R.D.L.); (G.P.); (S.C.); (L.U.)
| | - Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy; (A.C.); (D.R.D.L.); (G.P.); (S.C.); (L.U.)
| | - Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples “Federico II”, 80131 Naples, Italy
| |
Collapse
|