1. Demircioğlu A. The effect of feature normalization methods in radiomics. Insights Imaging 2024;15:2. [PMID: 38185786] [PMCID: PMC10772134] [DOI: 10.1186/s13244-023-01575-7]
Abstract
OBJECTIVES: In radiomics, different feature normalization methods, such as z-Score or Min-Max, are currently utilized, but their specific impact on the model is unclear. We aimed to measure their effect on the predictive performance and the feature selection.
METHODS: We employed fifteen publicly available radiomics datasets to compare seven normalization methods. Using four feature selection and classifier methods, we used cross-validation to measure the area under the curve (AUC) of the resulting models, the agreement of selected features, and the model calibration. In addition, we assessed whether normalization before cross-validation introduces bias.
RESULTS: On average, the difference between the normalization methods was relatively small, with a gain of at most +0.012 in AUC when comparing the z-Score (mean AUC: 0.719 ± 0.107) to no normalization (mean AUC: 0.707 ± 0.102). However, on some datasets, the difference reached +0.051. The z-Score performed best, while the tanh transformation showed the worst performance and even decreased the overall predictive performance. Although the quantile transformation performed, on average, slightly worse than the z-Score, it outperformed all other methods on one out of three datasets. The agreement between the features selected by different normalization methods was only mild, reaching at most 62%. Applying the normalization before cross-validation did not introduce significant bias.
CONCLUSION: The choice of the feature normalization method influenced the predictive performance, but the effect depended strongly on the dataset. It also strongly impacted the set of selected features.
CRITICAL RELEVANCE STATEMENT: Feature normalization plays a crucial role in preprocessing and influences both the predictive performance and the selected features, complicating feature interpretation.
KEY POINTS:
• The impact of feature normalization methods on radiomic models was measured.
• Normalization methods performed similarly on average, but differed more strongly on some datasets.
• Different methods led to different sets of selected features, impeding feature interpretation.
• Model calibration was not largely affected by the normalization method.
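For orientation, a minimal sketch of the kind of comparison described in this abstract, assuming scikit-learn, a placeholder logistic-regression classifier, and a synthetic stand-in for a radiomics feature matrix (none of these choices are taken from the paper itself):

```python
# Hedged sketch, not the authors' code: compare several feature normalization
# methods inside a cross-validated pipeline, so each normalizer is re-fit on
# every training fold (avoiding the pre-CV normalization the paper examines).
import numpy as np
from sklearn.preprocessing import (StandardScaler, MinMaxScaler,
                                   QuantileTransformer, FunctionTransformer)
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def tanh_normalize(X):
    # Simple tanh estimator: squash z-scored features into (-1, 1)
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    return np.tanh(0.01 * (X - mu) / sigma)

normalizers = {
    "none": "passthrough",
    "z-Score": StandardScaler(),
    "Min-Max": MinMaxScaler(),
    "quantile": QuantileTransformer(output_distribution="normal", n_quantiles=50),
    "tanh": FunctionTransformer(tanh_normalize),
}

# Synthetic stand-in for one radiomics dataset (samples x features, binary labels)
rng = np.random.default_rng(0)
X, y = rng.random((120, 100)), rng.integers(0, 2, 120)

for name, norm in normalizers.items():
    pipe = Pipeline([("norm", norm), ("clf", LogisticRegression(max_iter=1000))])
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>8}: mean AUC = {auc:.3f}")
```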
Affiliation(s)
- Aydin Demircioğlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstrasse 55, 45147, Essen, Germany.
2. Koyuncu H, Barstuğan M. A New Breakpoint to Classify 3D Voxels in MRI: A Space Transform Strategy with 3t2FTS-v2 and Its Application for ResNet50-Based Categorization of Brain Tumors. Bioengineering (Basel) 2023;10:629. [PMID: 37370560] [DOI: 10.3390/bioengineering10060629]
Abstract
Three-dimensional (3D) image analyses are frequently applied to classification tasks. 3D-based machine learning systems are generally built around one of two designs: a 3D-based deep learning model or a 3D-based task-specific framework. However, apart from the recent 3t2FTS approach, no promising feature transform from 3D to two-dimensional (2D) space has been efficiently investigated for classification applications in 3D magnetic resonance imaging (3D MRI); in other words, no state-of-the-art feature transform strategy is available that achieves high accuracy while allowing 2D-based deep learning models to be adapted to 3D MRI-based classification. To this end, this paper presents a new version of the 3t2FTS approach (3t2FTS-v2) that applies a transfer learning model to tumor categorization of 3D MRI data. For performance evaluation, the BraTS 2017/2018 dataset is used, which contains high-grade glioma (HGG) and low-grade glioma (LGG) samples in four different sequences/phases. 3t2FTS-v2 transforms the features from 3D to 2D space using two groups of textural features: first-order statistics (FOS) and the gray-level run length matrix (GLRLM). In 3t2FTS-v2, the normalization analyses are revised relative to 3t2FTS, in addition to the use of GLRLM features, so that the spatial information is transformed accurately. The ResNet50 architecture is chosen for the HGG/LGG classification owing to its remarkable performance in tumor grading. For the classification of 3D data, the proposed model achieves 99.64% accuracy, highlighting the value of 3t2FTS-v2 not only for tumor grading but also for whole-brain tissue-based disease classification.
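As a hedged illustration of the general 3D-to-2D transform idea only (the exact feature layout of 3t2FTS-v2 is not specified in the abstract, and GLRLM features are omitted for brevity), one might summarize each volume as a per-slice table of first-order statistics and feed the resulting 2D map to a pre-trained ResNet50, assuming TensorFlow/Keras:

```python
# Illustrative sketch, not the authors' implementation: one 3D MRI volume is
# reduced to a 2D "feature image" (rows = slices, columns = FOS descriptors),
# which a 2D ImageNet-pretrained ResNet50 can then classify (HGG vs. LGG).
import numpy as np
import tensorflow as tf

def first_order_stats(slice_2d):
    # A few simple first-order statistics for one axial slice
    v = slice_2d[slice_2d > 0]  # ignore zero-valued background voxels
    if v.size == 0:
        return np.zeros(6)
    return np.array([v.mean(), v.std(), v.min(), v.max(),
                     np.median(v), np.percentile(v, 90)])

def volume_to_2d(volume_3d):
    feats = np.stack([first_order_stats(s) for s in volume_3d])
    rng = feats.max() - feats.min() + 1e-12
    return (feats - feats.min()) / rng  # min-max normalize like image intensities

volume = np.random.rand(155, 240, 240)      # stand-in for one BraTS volume
feature_map = volume_to_2d(volume)          # shape (155, 6)

# Resize and replicate channels to match ResNet50's expected 224x224x3 input
img = tf.image.resize(feature_map[..., None].astype("float32"), (224, 224))
img = tf.repeat(img, 3, axis=-1)[None]      # add batch dimension

backbone = tf.keras.applications.ResNet50(weights="imagenet",
                                          include_top=False, pooling="avg")
head = tf.keras.layers.Dense(1, activation="sigmoid")  # HGG vs. LGG
model = tf.keras.Sequential([backbone, head])
print(model(img).shape)  # (1, 1)
```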
Affiliation(s)
- Hasan Koyuncu
- Electrical & Electronics Engineering Department, Faculty of Engineering and Natural Sciences, Konya Technical University, Konya 42250, Türkiye
- Mücahid Barstuğan
- Electrical & Electronics Engineering Department, Faculty of Engineering and Natural Sciences, Konya Technical University, Konya 42250, Türkiye
3. McCague C, Ramlee S, Reinius M, Selby I, Hulse D, Piyatissa P, Bura V, Crispin-Ortuzar M, Sala E, Woitek R. Introduction to radiomics for a clinical audience. Clin Radiol 2023;78:83-98. [PMID: 36639175] [DOI: 10.1016/j.crad.2022.08.149]
Abstract
Radiomics is a rapidly developing field of research focused on the extraction of quantitative features from medical images, thus converting these digital images into minable, high-dimensional data, which offer unique biological information that can enhance our understanding of disease processes and provide clinical decision support. To date, most radiomics research has been focused on oncological applications; however, it is increasingly being used in a raft of other diseases. This review gives an overview of radiomics for a clinical audience, including the radiomics pipeline and the common pitfalls associated with each stage. Key studies in oncology are presented with a focus on both those that use radiomics analysis alone and those that integrate its use with other multimodal data streams. Importantly, clinical applications outside oncology are also presented. Finally, we conclude by offering a vision for radiomics research in the future, including how it might impact our practice as radiologists.
Affiliation(s)
- C McCague
- Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK.
- S Ramlee
- Department of Radiology, University of Cambridge, Cambridge, UK
- M Reinius
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- I Selby
- Department of Radiology, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- D Hulse
- Department of Radiology, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- P Piyatissa
- Department of Radiology, University of Cambridge, Cambridge, UK
- V Bura
- Department of Radiology, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK; Department of Radiology and Medical Imaging, County Clinical Emergency Hospital, Cluj-Napoca, Romania
- M Crispin-Ortuzar
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; Department of Oncology, University of Cambridge, Cambridge, UK
- E Sala
- Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- R Woitek
- Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK; Research Centre for Medical Image Analysis and Artificial Intelligence (MIAAI), Department of Medicine, Faculty of Medicine and Dentistry, Danube Private University, Krems, Austria
4. El Gannour O, Hamida S, Cherradi B, Al-Sarem M, Raihani A, Saeed F, Hadwan M. Concatenation of Pre-Trained Convolutional Neural Networks for Enhanced COVID-19 Screening Using Transfer Learning Technique. Electronics 2021;11:103. [DOI: 10.3390/electronics11010103]
Abstract
Coronavirus disease (COVID-19) is the most prevalent coronavirus infection, with respiratory symptoms such as fever, cough, dyspnea, pneumonia, and weariness being typical in the early stages. COVID-19 also has a direct impact on the circulatory and respiratory systems, as it can cause organ failure or severe respiratory distress in extreme circumstances. Early diagnosis of COVID-19 is extremely important for the medical community to limit its spread. For a large number of suspected cases, manual diagnostic methods based on the analysis of chest images are insufficient. Faced with this situation, artificial intelligence (AI) techniques have shown great potential in automatic diagnostic tasks. This paper proposes a fast and precise medical diagnosis support system (MDSS) that can distinguish COVID-19 precisely in chest X-ray images. The MDSS uses a concatenation technique that combines pre-trained convolutional neural networks (CNNs), based on the transfer learning (TL) technique, to build a highly accurate model. Transfer learning enables knowledge learned by a pre-trained CNN to be stored and applied to a new task, namely COVID-19 case detection. For this purpose, we employed the concatenation method to aggregate the performance of several pre-trained models and confirm the reliability of the proposed method for identifying patients with COVID-19 from X-ray images. The proposed system was trialed on a dataset that included four classes: normal, viral pneumonia, tuberculosis, and COVID-19 cases. Various general evaluation metrics were used to assess the effectiveness of the proposed model. The first proposed model achieved an accuracy of 99.80%, while the second model reached an accuracy of 99.71%.
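As a hedged sketch of the concatenation idea only (the abstract does not name the backbones; VGG16 and ResNet50 are assumed here purely as examples), two frozen ImageNet-pretrained feature extractors can be concatenated ahead of a small four-class head in TensorFlow/Keras:

```python
# Hedged sketch, not the paper's exact architecture: concatenate the pooled
# features of two pre-trained CNNs (transfer learning) and classify the four
# classes normal / viral pneumonia / tuberculosis / COVID-19.
import tensorflow as tf

inp = tf.keras.Input(shape=(224, 224, 3))

# Frozen ImageNet-pretrained feature extractors; in a real pipeline each
# backbone's own preprocess_input should be applied to the images first.
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False, pooling="avg")
res = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False
res.trainable = False

x = tf.keras.layers.Concatenate()([vgg(inp), res(inp)])  # 512 + 2048 features
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.3)(x)
out = tf.keras.layers.Dense(4, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```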
Affiliation(s)
- Oussama El Gannour
- Electrical Engineering and Intelligent Systems (EEIS) Laboratory, ENSET of Mohammedia, Hassan II University of Casablanca, B.P. 159, Mohammedia 28820, Morocco
- Soufiane Hamida
- Electrical Engineering and Intelligent Systems (EEIS) Laboratory, ENSET of Mohammedia, Hassan II University of Casablanca, B.P. 159, Mohammedia 28820, Morocco
- Bouchaib Cherradi
- Electrical Engineering and Intelligent Systems (EEIS) Laboratory, ENSET of Mohammedia, Hassan II University of Casablanca, B.P. 159, Mohammedia 28820, Morocco
- STIE Team, CRMEF Casablanca-Settat, Provincial Section of El Jadida, El Jadida 24000, Morocco
- Mohammed Al-Sarem
- College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Department of Computer Science, Saba’a Region University, Marib 0000, Yemen
- Abdelhadi Raihani
- Electrical Engineering and Intelligent Systems (EEIS) Laboratory, ENSET of Mohammedia, Hassan II University of Casablanca, B.P. 159, Mohammedia 28820, Morocco
- Faisal Saeed
- School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
- Mohammed Hadwan
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Department of Computer Science, College of Applied Sciences, Taiz University, Taiz 6803, Yemen