1
Bilal A, Alkhathlan A, Kateb FA, Tahir A, Shafiq M, Long H. A quantum-optimized approach for breast cancer detection using SqueezeNet-SVM. Sci Rep 2025; 15:3254. PMID: 39863687; PMCID: PMC11763032; DOI: 10.1038/s41598-025-86671-y.
Abstract
Breast cancer is one of the most aggressive types of cancer, and its early diagnosis is crucial for reducing mortality rates and ensuring timely treatment. Computer-aided diagnosis (CAD) systems provide automated mammography image processing, interpretation, and grading. However, because existing methods suffer from issues such as overfitting, lack of adaptability, and dependence on massive annotated datasets, the present work introduces a hybrid approach to enhance breast cancer classification accuracy. The proposed Q-BGWO-SQSVM approach combines an improved quantum-inspired binary Grey Wolf Optimizer with SqueezeNet and Support Vector Machines. SqueezeNet's fire modules and complex bypass mechanisms extract distinctive features from mammography images; the Q-BGWO then determines the optimal SVM parameters for classifying these features. A more reliable, accurate, and sensitive CAD system of this kind is advantageous for healthcare. The proposed Q-BGWO-SQSVM was evaluated on diverse databases: MIAS, INbreast, DDSM, and CBIS-DDSM, with performance analyzed in terms of accuracy, sensitivity, specificity, precision, F1 score, and MCC. Notably, on the CBIS-DDSM dataset, Q-BGWO-SQSVM achieved remarkable results of 99% accuracy, 98% sensitivity, and 100% specificity under 15-fold cross-validation. The performance of the designed Q-BGWO-SQSVM model is thus excellent, and its potential generalization to other datasets and imaging conditions is promising. The novel Q-BGWO-SQSVM model outperforms state-of-the-art classification methods and offers accurate and reliable early breast cancer detection, which is essential for further healthcare development.
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
- Ali Alkhathlan
- Department of Computer Science, Faculty of Computing and Information Technology, King AbdulAziz University, Jeddah, Saudi Arabia
- Faris A Kateb
- Department of Information Technology, Faculty of Computing and Information Technology, King AbdulAziz University, Jeddah, Saudi Arabia
- Alishba Tahir
- Shifa College of Medicine, Shifa Tameer-e-Millat University, Islamabad, Pakistan
- Muhammad Shafiq
- School of Information Engineering, Qujing Normal University, Yunnan, China
- Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
2
Amini M, Salimi Y, Hajianfar G, Mainta I, Hervier E, Sanaat A, Rahmim A, Shiri I, Zaidi H. Fully Automated Region-Specific Human-Perceptive-Equivalent Image Quality Assessment: Application to 18F-FDG PET Scans. Clin Nucl Med 2024; 49:1079-1090. PMID: 39466652; DOI: 10.1097/rlu.0000000000005526.
Abstract
INTRODUCTION We propose a fully automated framework to conduct region-wise image quality assessment (IQA) on whole-body 18F-FDG PET scans. This framework (1) can be valuable in daily clinical image acquisition procedures to instantly recognize low-quality scans for potential rescanning and/or image reconstruction, and (2) can make a significant impact on dataset collection for the development of artificial intelligence-driven 18F-FDG PET analysis models by rejecting low-quality images and those presenting with artifacts, toward building clean datasets. PATIENTS AND METHODS Two experienced nuclear medicine physicians separately evaluated the quality of 174 18F-FDG PET images from 87 patients, for each body region, on a 5-point Likert scale. The body regions included the following: (1) the head and neck, including the brain; (2) the chest; (3) the chest-abdomen interval (diaphragmatic region); (4) the abdomen; and (5) the pelvis. Intrareader and interreader reproducibility of the quality scores were calculated using 39 randomly selected scans from the dataset. Using a binarized classification, images were dichotomized into low quality versus high quality for physician quality scores ≤3 versus >3, respectively. Taking the 18F-FDG PET/CT scans as input, the proposed fully automated framework applies two deep learning (DL) models to the CT images to perform region identification and whole-body contour extraction (excluding the extremities), then classifies PET regions as low or high quality. For classification, two mainstream artificial intelligence approaches, machine learning (ML) from radiomic features and DL, were investigated. All models were trained and evaluated on the scores attributed by each physician, and the average of the scores was reported. The DL and radiomics-ML models were evaluated on the same test dataset using the area under the curve (AUC), accuracy, sensitivity, and specificity, and compared using the DeLong test, with P values <0.05 regarded as statistically significant. RESULTS In the head and neck, chest, chest-abdomen interval, abdomen, and pelvis regions, the best models achieved AUC, accuracy, sensitivity, and specificity of [0.97, 0.95, 0.96, and 0.95], [0.85, 0.82, 0.87, and 0.76], [0.83, 0.76, 0.68, and 0.80], [0.73, 0.72, 0.64, and 0.77], and [0.72, 0.68, 0.70, and 0.67], respectively. In all regions, models showed the highest performance when developed on the quality scores with higher intrareader reproducibility. Comparison of the DL and radiomics-ML models did not show any statistically significant differences, though the DL models showed overall improved trends. CONCLUSIONS We developed a fully automated, human-perceptive-equivalent model to conduct region-wise IQA on 18F-FDG PET images. Our analysis emphasizes the necessity of developing separate models for body regions and performing data annotation based on multiple experts' consensus in IQA studies.
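As a minimal sketch of the dichotomization described in the abstract (all scores below are hypothetical, not taken from the study), 5-point Likert scores can be binarized at the ≤3 versus >3 threshold and the reported metrics computed from the resulting confusion matrix:

```python
def binarize(scores, threshold=3):
    """Map 5-point Likert scores to 1 (high quality) if score > threshold, else 0."""
    return [1 if s > threshold else 0 for s in scores]

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

physician = binarize([5, 4, 2, 3, 1, 4, 5, 2])   # hypothetical reference scores
model = binarize([4, 4, 3, 2, 2, 5, 3, 1])       # hypothetical model scores
acc, sens, spec = classification_metrics(physician, model)
```

This sketch covers only the thresholding and confusion-matrix step; the study's actual pipeline additionally involves region segmentation, AUC, and DeLong testing.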
Affiliation(s)
- Mehdi Amini
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elsa Hervier
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
3
Wang Q, Han X, Song L, Zhang X, Zhang B, Gu Z, Jiang B, Li C, Li X, Yu Y. Automatic quality assessment of knee radiographs using knowledge graphs and convolutional neural networks. Med Phys 2024; 51:7464-7478. PMID: 39016559; DOI: 10.1002/mp.17316.
Abstract
BACKGROUND X-ray radiography is a widely used imaging technique worldwide, and its image quality directly affects diagnostic accuracy. Therefore, X-ray image quality control (QC) is essential. However, subjectively assessing image quality is inefficient and inconsistent, especially when large amounts of image data are being evaluated. Thus, subjective assessment cannot meet current QC needs. PURPOSE To meet current QC needs and improve the efficiency of image quality assessment, a complete set of quality assessment criteria must be established and implemented using artificial intelligence (AI) technology. Therefore, we proposed a multi-criteria AI system for automatically assessing the image quality of knee radiographs. METHODS A knee radiograph QC knowledge graph containing 16 "acquisition technique" labels representing 16 image quality defects and five "clarity" labels representing five grades of clarity was developed. Ten radiographic technologists conducted three rounds of QC based on this graph. The single-person QC results were denoted as QC1 and QC2, and the multi-person QC results were denoted as QC3. Each technologist labeled each image only once. The ResNet model structure was then used to simultaneously perform classification (detection of image quality defects) and regression (output of a clarity score) tasks to construct an image QC system. The QC3 results, comprising 4324 anteroposterior and lateral knee radiographs, were used for model training (70% of the images), validation (10%), and testing (20%). The 865 test set images were used to evaluate the effectiveness of the AI model, and an AI QC result, QC4, was automatically generated by the model after training. Finally, using a double-blind method, a senior QC expert reviewed the final QC results of the test set with reference to results QC3 and QC4 and used them as a reference standard to evaluate the performance of the model.
The precision and mean absolute error (MAE) were used to evaluate the quality of all the labels in relation to the reference standard. RESULTS For the 16 "acquisition technique" features, QC4 exhibited the highest weighted average precision (98.42% ± 0.81%), followed by QC3 (91.39% ± 1.35%), QC2 (87.84% ± 1.68%), and QC1 (87.35% ± 1.71%). For the image clarity features, the MAEs between QC1, QC2, QC3, and QC4 and the reference standard were 0.508 ± 0.021, 0.475 ± 0.019, 0.237 ± 0.016, and 0.303 ± 0.018, respectively. CONCLUSIONS The experimental results show that our automated quality assessment system performed well in classifying the acquisition technique used for knee radiographs. The image clarity quality evaluation accuracy of the model must be further improved but is generally close to that of radiographic technologists. Intelligent QC methods using knowledge graphs and convolutional neural networks have the potential for clinical applications.
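The precision and MAE comparisons above can be sketched as follows (the defect labels and clarity scores here are hypothetical illustrations, not data from the study):

```python
def precision(y_true, y_pred, positive=1):
    """Fraction of predicted-positive labels that match the reference standard."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    pp = sum(1 for p in y_pred if p == positive)
    return tp / pp if pp else 0.0

def mean_absolute_error(ref, scores):
    """MAE between reference clarity scores and a QC round's clarity scores."""
    return sum(abs(r - s) for r, s in zip(ref, scores)) / len(ref)

# One "acquisition technique" defect label (1 = defect present), hypothetical:
ref_labels = [1, 0, 1, 1, 0, 0]
qc_labels = [1, 0, 0, 1, 1, 0]
p = precision(ref_labels, qc_labels)     # 2 true positives / 3 predicted positives

# Clarity grades on a 5-point scale, hypothetical:
ref_clarity = [4.0, 3.0, 5.0, 2.0]
qc_clarity = [3.5, 3.0, 4.0, 2.5]
mae = mean_absolute_error(ref_clarity, qc_clarity)
```

The paper's weighted average precision aggregates per-label precisions weighted by label frequency; the per-label computation is as above.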
Affiliation(s)
- Qian Wang
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiao Han
- College of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, China
- Liangliang Song
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xin Zhang
- College of Computer Science and Technology, Anhui University, Hefei, China
- Biao Zhang
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Artificial Intelligence Research Institute, Hefei Comprehensive National Science Center, Hefei, China
- Zongyun Gu
- College of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, China
- Artificial Intelligence Research Institute, Hefei Comprehensive National Science Center, Hefei, China
- Bo Jiang
- College of Computer Science and Technology, Anhui University, Hefei, China
- Chuanfu Li
- College of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, China
- Artificial Intelligence Research Institute, Hefei Comprehensive National Science Center, Hefei, China
- Anhui Provincial Imaging Diagnosis Quality Control Center, Anhui Provincial Health Commission, Hefei, China
- Xiaohu Li
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Provincial Imaging Diagnosis Quality Control Center, Anhui Provincial Health Commission, Hefei, China
- Yongqiang Yu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Provincial Imaging Diagnosis Quality Control Center, Anhui Provincial Health Commission, Hefei, China
4
Lozano FR, Rojo D, Martínez LC, Ramon C. PSF and MTF from a bar pattern in digital mammography. Biomed Phys Eng Express 2024; 10:045051. PMID: 38821042; DOI: 10.1088/2057-1976/ad5296.
Abstract
Background. The MTF is difficult to determine (according to the provisions of the IEC standards) in the hospital setting due to a lack of resources. Purpose. The objective of this work is to propose a quantitative method for obtaining the point spread function (PSF) and the modulation transfer function (MTF) of a digital mammography system from an image of a bar pattern. Methods. The method is based on the measurement of the contrast transfer function (CTF) of the system over the image of the bar pattern. In addition, a theoretical model for the PSF is proposed, from which the theoretical CTF of the system is obtained by means of convolution with a square wave (a mathematical simulation of the bar pattern). Through an iterative process, the free parameters of the PSF model are varied until the experimental CTF coincides with the one calculated by convolution. Once the PSF of the system is obtained, the MTF is calculated by means of its Fourier transform. The MTFs calculated from the model PSF were compared with those calculated from an image of a 65 μm diameter gold wire using an oversampling process. Results. The CTF was calculated for three digital mammographic systems (DMS 1, DMS 2, and DMS 3); no differences of more than 5% were found with respect to the CTF obtained with the PSF model. The comparison of the MTFs shows the goodness of the PSF model. Conclusions. The proposed method for obtaining the PSF and MTF is simple and accessible, requiring neither a complex configuration nor phantoms that are difficult to access in the hospital setting. In addition, it can be used to calculate other quantities of interest such as the normalized noise power spectrum (NNPS) and the detective quantum efficiency (DQE).
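A minimal numerical sketch of this pipeline, assuming a simple 1-D Gaussian PSF model (the paper's actual PSF model is not specified here): convolve a candidate PSF with an ideal square wave to obtain a theoretical CTF point, and take the Fourier transform of the PSF to obtain the MTF.

```python
import numpy as np

dx = 0.01                           # sampling pitch (mm)
x = np.arange(-5, 5, dx)            # spatial axis (mm)
sigma = 0.05                        # free parameter of the assumed Gaussian PSF (mm)
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                    # normalize to unit area

def ctf_point(f):
    """Modulation of an ideal bar pattern of f line pairs/mm after PSF blur."""
    bars = (np.sign(np.sin(2 * np.pi * f * x)) + 1) / 2   # ideal square wave
    blurred = np.convolve(bars, psf, mode="same")
    return (blurred.max() - blurred.min()) / (blurred.max() + blurred.min())

# MTF: modulus of the PSF's Fourier transform, normalized to 1 at zero frequency.
mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(x), d=dx)   # spatial frequencies (cycles/mm)
```

In the paper's iterative fit, `sigma` (and any other free parameters of the PSF model) would be adjusted until `ctf_point` matches the CTF measured on the bar-pattern image; the 2-D and noise-handling details are omitted here.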
Affiliation(s)
- F R Lozano
- Hospital Universitario 12 de Octubre, Madrid, Spain
- Daniel Rojo
- Hospital Universitario 12 de Octubre, Madrid, Spain
- L C Martínez
- Hospital Universitario 12 de Octubre, Madrid, Spain
5
Bilal A, Imran A, Baig TI, Liu X, Abouel Nasr E, Long H. Breast cancer diagnosis using support vector machine optimized by improved quantum inspired grey wolf optimization. Sci Rep 2024; 14:10714. PMID: 38730250; PMCID: PMC11087531; DOI: 10.1038/s41598-024-61322-w.
Abstract
A prompt diagnosis of breast cancer in its earliest phases is necessary for effective treatment. While computer-aided diagnosis systems play a crucial role in automated mammography image processing, interpretation, grading, and early detection of breast cancer, existing approaches face limitations in achieving optimal accuracy. This study addresses these limitations by hybridizing the improved quantum-inspired binary Grey Wolf Optimizer with the Support Vector Machine radial basis function kernel. This hybrid approach aims to enhance the accuracy of breast cancer classification by determining the optimal Support Vector Machine parameters. The motivation for this hybridization lies in the need for improved classification performance compared to existing optimizers such as Particle Swarm Optimization and the Genetic Algorithm. The efficacy of the proposed IQI-BGWO-SVM approach was evaluated on the MIAS dataset, considering various metrics, including accuracy, sensitivity, and specificity. Furthermore, the application of IQI-BGWO-SVM to feature selection was explored, and the results were compared. Experimental findings demonstrate that the suggested IQI-BGWO-SVM technique outperforms state-of-the-art classification methods on the MIAS dataset, with a resulting mean accuracy, sensitivity, and specificity of 99.25%, 98.96%, and 100%, respectively, using a tenfold cross-validation dataset partition.
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
- Azhar Imran
- Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan
- Talha Imtiaz Baig
- School of Life Science and Technology, University of Electronic Science and Technology of China (UESTC), Chengdu, Sichuan, China
- Xiaowen Liu
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Emad Abouel Nasr
- Industrial Engineering Department, College of Engineering, King Saud University, 11421, Riyadh, Saudi Arabia
- Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
6
Angelone F, Ponsiglione AM, Grassi R, Amato F, Sansone M. A general framework for the assessment of scatter correction techniques in digital mammography. Biomed Signal Process Control 2024; 89:105802. DOI: 10.1016/j.bspc.2023.105802.
7
Nassar J, Rizk C, Fares G, Tohme C, Braidy C, Farah J. Clinical image quality assessment and mean glandular dose for full field digital mammography. J Radiol Prot 2024; 44:011503. PMID: 38194904; DOI: 10.1088/1361-6498/ad1cd4.
Abstract
This study aims to assess the image quality (IQ) of 12 mammographic units and to identify units with potential optimisation needs. Data for 350 mammography examinations meeting the inclusion criteria were collected retrospectively from April 2021 to April 2022. Based on the medical reports, they were categorised into 10 normal cases, 10 cases displaying calcifications, and 10 cases presenting lesions. Two radiologists assessed the IQ of 1400 mammograms, evaluating system performance per Boita et al's study and positioning performance following the European guidelines. To measure agreement between the two radiologists, Cohen's kappa coefficient (κ) was computed, quantifying the excess of agreement beyond chance. The visual grading analysis score (VGAS) was computed to compare system and positioning performance assessments across different categories and facilities. Median average glandular dose (AGD) values for the craniocaudal and mediolateral oblique views were calculated for each category and facility and compared to the national diagnostic reference levels. The health facilities were categorised by considering both IQ VGAS and AGD levels. Inter-rater agreement between the radiologists ranged from poor (κ < 0.20) to moderate (0.41 < κ < 0.60), likely influenced by inherent biases and distinct IQ expectations. Fifty percent of the facilities were classified as needing corrective actions for their system performance, as they had insufficient IQ or high AGD that could increase the recall rate and radiation risk, and 50% of the facilities exhibited insufficient positioning performance that could mask tumour masses and microcalcifications. The study's findings emphasise the importance of implementing quality assurance programmes to ensure optimal IQ for accurate diagnoses while adhering to radiation exposure guidelines. Additionally, comprehensive training for technologists is essential to address positioning challenges. These initiatives collectively aim to enhance the overall quality of breast imaging services, contributing to improved patient care.
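Cohen's kappa, used above to quantify agreement beyond chance between the two radiologists, can be computed from paired ratings as follows (the ratings shown are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category independently.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

r1 = [3, 2, 3, 1, 2, 3, 3, 1]   # hypothetical IQ scores from radiologist 1
r2 = [3, 2, 2, 1, 2, 3, 1, 1]   # hypothetical IQ scores from radiologist 2
kappa = cohens_kappa(r1, r2)
```

By the usual convention cited in the abstract, κ < 0.20 indicates poor agreement and 0.41 < κ < 0.60 moderate agreement.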
Affiliation(s)
- Joyce Nassar
- Faculty of Sciences, Saint-Joseph University, PO Box 11-514, Riad El Solh, Beirut 1107 2050, Lebanon
- Chadia Rizk
- Faculty of Sciences, Saint-Joseph University, PO Box 11-514, Riad El Solh, Beirut 1107 2050, Lebanon
- Lebanese Atomic Energy Commission, National Council for Scientific Research, 11-8281 Beirut, Lebanon
- Georges Fares
- Faculty of Sciences, Saint-Joseph University, PO Box 11-514, Riad El Solh, Beirut 1107 2050, Lebanon
- Carla Tohme
- Radiology Department, Hôtel-Dieu de France Hospital, PO Box 166830, Beirut, Lebanon
- Chady Braidy
- Radiology Department, Hôtel-Dieu de France Hospital, PO Box 166830, Beirut, Lebanon
- Jad Farah
- Vision RT Ltd, Dove House, Arcadia Ave, Finchley, London N3 2JU, United Kingdom
8
Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024; 52:41-60. PMID: 38780105; DOI: 10.1615/critrevbiomedeng.2024051166.
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. The prevalence of breast masses is notably common in women aged 20 to 60. According to the Breast Imaging-Reporting and Data System (BI-RADS) standard, these breast masses are classified into categories such as fibroadenoma, breast cysts, and benign and malignant masses. Imaging plays a vital role in the diagnosis of breast disorders, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, identifying breast diseases from mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically those utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems. These systems assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of studies in which diverse machine learning approaches have been applied to digital mammograms to identify breast masses and classify them into distinct subclasses such as normal, benign, and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for future research in this critical area of medical imaging and breast health.
Affiliation(s)
- N Shankari
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
- Vidya Kudva
- School of Information Sciences, Manipal Academy of Higher Education, Manipal 576104, India; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte 574110, India
- Roopa B Hegde
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
9
Hejduk P, Sexauer R, Ruppert C, Borkowski K, Unkelbach J, Schmidt N. Automatic and standardized quality assurance of digital mammography and tomosynthesis with deep convolutional neural networks. Insights Imaging 2023; 14:90. PMID: 37199794; DOI: 10.1186/s13244-023-01396-8.
Abstract
OBJECTIVES The aim of this study was to develop and validate a commercially available AI platform for the automatic determination of image quality in mammography and tomosynthesis based on a standardized set of features. MATERIALS AND METHODS In this retrospective study, 11,733 mammograms and synthetic 2D reconstructions from tomosynthesis of 4200 patients from two institutions were analyzed by assessing the presence of seven features that impact image quality with regard to breast positioning. Deep learning was applied to train five dCNN models to detect the presence of anatomical landmarks and three dCNN models for localization features. The validity of the models was assessed by calculating the mean squared error on a test dataset and comparing the results to readings by experienced radiologists. RESULTS Accuracies of the dCNN models ranged between 93.0% for nipple visualization and 98.5% for depiction of the pectoralis muscle in the CC view. Calculations based on regression models allow for precise measurements of distances and angles of breast positioning on mammograms and synthetic 2D reconstructions from tomosynthesis. All models showed almost perfect agreement with human reading, with Cohen's kappa scores above 0.9. CONCLUSIONS An AI-based quality assessment system using a dCNN allows for precise, consistent, and observer-independent rating of digital mammography and synthetic 2D reconstructions from tomosynthesis. Automation and standardization of quality assessment enable real-time feedback to technicians and radiologists that should reduce the number of examinations rated inadequate according to the PGMI (Perfect, Good, Moderate, Inadequate) criteria, reduce the number of recalls, and provide a dependable training platform for inexperienced technicians.
Affiliation(s)
- Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Raphael Sexauer
- Breast Imaging, Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland
- Carlotta Ruppert
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital Zurich, Zurich, Switzerland
- Noemi Schmidt
- Breast Imaging, Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland
10
Qi C, Wang S, Yu H, Zhang Y, Hu P, Tan H, Shi Y, Shi H. An artificial intelligence-driven image quality assessment system for whole-body [18F]FDG PET/CT. Eur J Nucl Med Mol Imaging 2023; 50:1318-1328. PMID: 36529840; DOI: 10.1007/s00259-022-06078-z.
Abstract
PURPOSE Image quality control is a prerequisite for applying PET/CT. This study aimed to develop an artificial intelligence-driven, real-time, and accurate whole-body [18F]FDG PET/CT image quality assessment system. METHODS This study included 173 patients (age, 59 ± 12 years; 66.3% males) with whole-body [18F]FDG PET/CT imaging. Images of ten patients were used as an educational set. Images of the remaining 163 patients were reconstructed into 952 images by simulating several scanning times and randomly split into training (60%, 98 patients, 578 images), validation (20%, 33 patients, 192 images), and test (20%, 32 patients, 182 images) sets. Two experienced physicians (R1 and R2) independently assessed the image quality of the thorax, abdomen, and pelvis regions twice (R1a and b; R2a and b), 1 month apart, using a 5-point Likert scale. Objective image quality metrics were extracted from the mediastinal blood pool, three liver levels, and the bilateral gluteus maximus. The developed convolutional neural networks for image quality assessment (IQA-CNNs) generated the subjective quality scores and objective image metrics. The IQA-CNNs and physicians' performances were compared for localization accuracy, score agreement, and processing time. RESULTS The physicians demonstrated good inter- and intra-rater subjective assessment agreement, with kappa coefficients (R1a vs. R2a, R1a vs. R1b, R2a vs. R2b, and R1a vs. R2b) of 0.78, 0.77, 0.76, and 0.80. The IQA-CNNs agreed with R1 and R2 in the subjective assessments, with kappa coefficients of 0.79 and 0.78, and in the objective image quality assessment (ICC > 0.60). The IQA-CNNs' evaluation speed was 200 times faster than manual assessment. CONCLUSION An automated system for rapid assessment of [18F]FDG PET/CT image quality was developed, showing performance comparable to that of senior physicians. The system generates a comprehensive and detailed image quality assessment report, including subjective visual scores and objective image metrics for various anatomical regions.
Affiliation(s)
- Chi Qi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Shuo Wang
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Haojun Yu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Yiqiu Zhang
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Pengcheng Hu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Hui Tan
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Yonghong Shi
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Hongcheng Shi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Institute of Nuclear Medicine, Fudan University, No. 180 in Fenglin Road, Shanghai, China
- Shanghai Institute of Medical Imaging, Shanghai, China
- Cancer Prevention and Treatment Center, Zhongshan Hospital, Fudan University, Shanghai, China
11
Alawaji Z, Tavakoli Taba S, Rae W. Automated image quality assessment of mammography phantoms: a systematic review. Acta Radiol 2023; 64:971-986. [PMID: 35866198 DOI: 10.1177/02841851221112856]
Abstract
BACKGROUND Computerized image analysis is a viable technique for evaluating image quality as a complement to human observers. PURPOSE To systematically review the image analysis software used in the assessment of 2D image quality using mammography phantoms. MATERIAL AND METHODS A systematic search of multiple databases was performed from inception to July 2020 for articles that incorporated computerized analysis of 2D images of physical mammography phantoms to determine image quality. RESULTS A total of 26 studies were included, 12 were carried out using direct digital imaging and 14 using screen film mammography. The ACR phantom (model-156) was the most frequently evaluated phantom, possibly due to the lack of accepted standard software. In comparison to the inter-observer variations, the computerized image analysis was more consistent in scoring test objects. The template matching method was found to be one of the most reliable algorithms, especially for high-contrast test objects, while several algorithms found low-contrast test objects to be harder to distinguish due to the smaller contrast variations between test objects and their backgrounds. This was particularly true for small object sizes. CONCLUSION Image analysis software was in agreement with human observers but demonstrated higher consistency and reproducibility of quality evaluation. Additionally, using computerized analysis, several quantitative metrics such as contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) could be used to complement the conventional scoring method. Implementing a computerized approach for monitoring image quality over time would be crucial to detect any deteriorating mammography system before clinical images are impacted.
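The quantitative metrics mentioned in the conclusion can be computed directly from region-of-interest (ROI) pixel values. The sketch below shows one common definition of SNR and CNR (definitions vary between protocols); the ROIs are simulated arrays with hypothetical grey levels, not phantom data from any of the reviewed studies:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform ROI: mean over standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(object_roi, background_roi):
    """Contrast-to-noise ratio: mean difference over background noise SD."""
    return (object_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

# Simulated ROIs: a test object ~20 grey levels above its background,
# both with Gaussian noise of SD 4 (all values hypothetical)
rng = np.random.default_rng(0)
obj = rng.normal(120.0, 4.0, size=(20, 20))
bkg = rng.normal(100.0, 4.0, size=(20, 20))
print(f"SNR = {snr(obj):.1f}, CNR = {cnr(obj, bkg):.1f}")
```

Tracking such metrics over time, as the review suggests, makes drift of a mammography system visible before phantom scores or clinical images degrade noticeably.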
Affiliation(s)
- Zeyad Alawaji: Discipline of Medical Imaging Science, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Radiologic Technology, College of Applied Medical Sciences, Qassim University, Buraydah, Saudi Arabia
- Seyedamir Tavakoli Taba: Discipline of Medical Imaging Science, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- William Rae: Discipline of Medical Imaging Science, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Medical Imaging Department, Prince of Wales Hospital, Randwick, NSW, Australia
12
Anton M, Mäder U, Schopphoven S, Reginatto M. A nonparametric measure of noise in x-ray diagnostic images-mammography. Phys Med Biol 2023; 68. [PMID: 36652714 DOI: 10.1088/1361-6560/acb485]
Abstract
Objective. In x-ray diagnostics, modern image reconstruction or image processing methods may render established methods of image quality assessment inadequate. Task-specific quality assessment using model observers has the disadvantage of being very labour-intensive. Therefore, it appears highly desirable to develop novel image quality parameters that neither rely on the linearity and shift-invariance of the imaging system nor require the acquisition of hundreds of images, as is necessary for the application of model observers, and which can be derived directly from diagnostic images. Approach. A new measure of noise based on non-maximum-suppression images is defined and its properties are explored using simulated images before it is applied to an exposure series of mammograms of a homogeneous phantom and a 3D-printed breast phantom to demonstrate its usefulness under realistic conditions. Main results. The new noise parameter can not only be derived from images with a homogeneous background but can also be extracted directly from images containing anatomic structures, and it is proportional to the standard deviation of the noise. At present, the applicability is restricted to mammography, which satisfies the assumption of a short covariance length of the noise. Significance. The new measure of noise is but a first step in the development of the set of parameters required to quantify image quality directly from diagnostic images without relying on the assumption of a linear, shift-invariant system, e.g. by providing measures of sharpness, contrast, and structural complexity in addition to the noise measure. For mammography, a convenient method is now available to quantify noise in processed diagnostic images.
Affiliation(s)
- M Anton: Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
- U Mäder: Institute of Medical Physics and Radiation Protection, University of Applied Sciences, Giessen, Germany
- S Schopphoven: Reference Centre for Mammography Screening Southwest Germany, Giessen, Germany
- M Reginatto: Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
13
Sundell VM, Mäkelä T, Vitikainen AM, Kaasalainen T. Convolutional neural network-based phantom image scoring for mammography quality control. BMC Med Imaging 2022; 22:216. [PMID: 36476319 PMCID: PMC9727908 DOI: 10.1186/s12880-022-00944-w]
Abstract
BACKGROUND Visual evaluation of phantom images is an important but time-consuming part of mammography quality control (QC). Consistent scoring of phantom images over the device's lifetime is highly desirable. Recently, convolutional neural networks (CNNs) have been applied to a wide range of image classification problems with high accuracy. The purpose of this study was to automate the mammography QC phantom scoring task by training CNN models to mimic a human reviewer. METHODS Eight CNN variations consisting of three to ten convolutional layers were trained to detect targets (fibres, microcalcifications, and masses) in American College of Radiology (ACR) accreditation phantom images, and the results were compared with human scoring. Regular and artificially degraded/improved QC phantom images from eight mammography devices were visually evaluated by one reviewer. These images were used in training the CNN models. A separate test set consisted of daily QC images from the eight devices and separately acquired images with varying dose levels. These were scored by four reviewers and considered the ground truth for CNN performance testing. RESULTS Although the hyperparameter search space was limited, an optimal network depth was identified, after which additional layers resulted in decreased accuracy. The highest scoring accuracy (95%) was achieved with the CNN consisting of six convolutional layers. The largest deviation between the CNN and the reviewers was found at the lowest dose levels. No significant difference emerged between the visual reviews and CNN results except in the case of the smallest masses. CONCLUSION A CNN-based automatic mammography QC phantom scoring system can score phantom images in good agreement with human reviewers and can therefore be of benefit in mammography QC.
Affiliation(s)
- Veli-Matti Sundell: Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Teemu Mäkelä: Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Anne-Mari Vitikainen: HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Touko Kaasalainen: HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
14
Schmähling F, Martin J, Elster C. A framework for benchmarking uncertainty in deep regression. Appl Intell 2022. [DOI: 10.1007/s10489-022-03908-3]
Abstract
We propose a framework for the assessment of uncertainty quantification in deep regression. The framework is based on regression problems in which the regression function is a linear combination of nonlinear functions. Essentially, any level of complexity can be realized through the choice of the nonlinear functions and the dimensionality of their domain. Results of an uncertainty quantification for deep regression are compared against those obtained by a statistical reference method. The reference method utilizes knowledge about the underlying nonlinear functions and is based on Bayesian linear regression using a reference prior. The flexibility, together with the availability of a reference solution, makes the framework suitable for defining benchmark sets for uncertainty quantification. Reliability of uncertainty quantification is assessed in terms of coverage probabilities, and accuracy in terms of the size of the calculated uncertainties. We illustrate the proposed framework by applying it to current approaches for uncertainty quantification in deep regression. In addition, results for three real-world regression tasks are presented.
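The coverage probability used above as a reliability criterion can be illustrated with a minimal simulation: repeatedly draw data, form a nominal 95% confidence interval for the mean, and count how often the interval contains the true value. All numbers below are illustrative assumptions, unrelated to the paper's benchmarks:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 3.0, 2.0   # true mean and noise SD (hypothetical)
n, z = 50, 1.96        # sample size and ~95% normal quantile
trials = 2000

covered = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    half_width = z * sample.std(ddof=1) / np.sqrt(n)
    covered += abs(sample.mean() - mu) <= half_width

# Reliable uncertainty quantification: empirical coverage close to nominal 95%
print(f"empirical coverage: {covered / trials:.3f}")
```

An uncertainty method whose empirical coverage falls well below the nominal level is overconfident; one far above it is reporting uncertainties that are too large.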
15
Amanova N, Martin J, Elster C. Explainability for deep learning in mammography image quality assessment. Mach Learn Sci Technol 2022. [DOI: 10.1088/2632-2153/ac7a03]
Abstract
The application of deep learning has recently been proposed for the assessment of image quality in mammography. It was demonstrated in a proof-of-principle study that the proposed approach can be more efficient than currently applied automated conventional methods. However, in contrast to conventional methods, the deep learning approach has a black-box nature and, before it can be recommended for routine use, it must be understood more thoroughly. For this purpose, we propose and apply a new explainability method: the oriented, modified integrated gradients (OMIG) method. The design of this method is inspired by the integrated gradients method but adapted considerably to the use case at hand. To further enhance this method, an upsampling technique is developed that produces high-resolution explainability maps for the downsampled data used by the deep learning approach. Comparison with established explainability methods demonstrates that the proposed approach yields substantially more expressive and informative results for our specific use case. Application of the proposed explainability approach generally confirms the validity of the considered deep learning-based mammography image quality assessment (IQA) method. Specifically, it is demonstrated that the predicted image quality is based on a meaningful mapping that makes successful use of certain geometric structures of the images. In addition, the novel explainability method helps us to identify the parts of the employed phantom that have the largest impact on the predicted image quality, and to shed some light on cases in which the trained neural networks fail to work as expected. While tailored to assess a specific approach from deep learning for mammography IQA, the proposed explainability method could also become relevant in other, similar deep learning applications based on high-dimensional images.
16
Inkinen SI, Mäkelä T, Kaasalainen T, Peltonen J, Kangasniemi M, Kortesniemi M. Automatic head computed tomography image noise quantification with deep learning. Phys Med 2022; 99:102-112. [PMID: 35671678 DOI: 10.1016/j.ejmp.2022.05.011]
Abstract
PURPOSE Computed tomography (CT) image noise is usually determined by the standard deviation (SD) of pixel values from uniform image regions. This study investigates how deep learning (DL) could be applied in head CT image noise estimation. METHODS Two approaches were investigated for noise image estimation of a single acquisition image: direct noise image estimation using a supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with a denoising UNet-CNN, experimented with supervised and unsupervised noise2noise training approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. An anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. Mean square error (MSE) and mean absolute percentage error (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. An open-source clinical head CT low-dose dataset (Ntrain = 37, Ntest = 10 subjects) was used to demonstrate DL applicability in noise estimation from manually labeled uniform regions and in automated noise and contrast assessment. RESULTS Direct SD estimation using the 3D-CNN was the most accurate assessment method in the phantom dataset (MAPE = 15.5%, MSE = 6.3 HU). The unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). The 2D-CNN and unsupervised UNet models provided the smallest MSE on clinical labeled uniform regions. CONCLUSIONS DL-based clinical image assessment is feasible and provides acceptable accuracy compared to true image noise. The noise2noise approach may be feasible in clinical use where no ground truth data is available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
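The ground truth used above, noise taken from subtraction images of repeated acquisitions, rests on the fact that the variance of the difference of two independent noise realizations is twice the single-image variance, so the SD of the subtraction image divided by √2 estimates the per-image noise. A minimal sketch with simulated images (all values hypothetical, not study data):

```python
import numpy as np

rng = np.random.default_rng(42)
true_sigma = 12.0                                  # hypothetical noise SD in HU
anatomy = rng.uniform(0.0, 80.0, size=(256, 256))  # fixed "anatomy"

# Two repeated acquisitions: identical anatomy, independent noise
scan1 = anatomy + rng.normal(0.0, true_sigma, anatomy.shape)
scan2 = anatomy + rng.normal(0.0, true_sigma, anatomy.shape)

# Var(scan1 - scan2) = 2 * sigma^2: the anatomy cancels, the noise adds
est_sigma = (scan1 - scan2).std() / np.sqrt(2)
mape = abs(est_sigma - true_sigma) / true_sigma * 100
print(f"estimated noise SD: {est_sigma:.2f} HU (error {mape:.2f}%)")
```

This is why repeated phantom scans give a usable noise reference even when the image content itself is not uniform.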
Affiliation(s)
- Satu I Inkinen: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Teemu Mäkelä: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki, Finland
- Touko Kaasalainen: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Juha Peltonen: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Marko Kangasniemi: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
- Mika Kortesniemi: HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
17
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757 PMCID: PMC8859891 DOI: 10.1186/s13058-022-01509-z]
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve better harm-to-benefit ratio based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data has led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor, (b) assessment of a woman's inherent breast cancer risk, and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. 
CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Shyam Desai: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
- Vinayak S Ahluwalia: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Emily F Conant: Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
18
Kadri F, Dairi A, Harrou F, Sun Y. Towards accurate prediction of patient length of stay at emergency department: a GAN-driven deep learning framework. J Ambient Intell Humaniz Comput 2022; 14:1-15. [PMID: 35132336 PMCID: PMC8810344 DOI: 10.1007/s12652-022-03717-z]
Abstract
Recently, hospital systems have faced a high influx of patients generated by several events, such as seasonal flows or health crises related to epidemics (e.g., COVID-19). Despite the extent of the care demands, hospital establishments, particularly emergency departments (EDs), must admit patients for medical treatment. However, the high patient influx often increases patients' length of stay (LOS) and leads to overcrowding problems within the EDs. To mitigate this issue, hospital managers need to predict the patient's LOS, which is an essential indicator for assessing ED overcrowding and the use of medical resources (allocation, planning, utilization rates). Thus, accurately predicting LOS is necessary to improve ED management. This paper proposes a deep learning-driven approach for predicting patient LOS in the ED using a generative adversarial network (GAN) model. The GAN-driven approach flexibly learns relevant information from linear and nonlinear processes without prior assumptions on the data distribution and significantly enhances the prediction accuracy. Furthermore, we classified the predicted patients' LOS according to time spent at the pediatric emergency department (PED) to further help decision-making and prevent overcrowding. The experiments were conducted on actual data obtained from the PED of the Lille regional hospital center, France. The GAN model results were compared with other deep learning models, including deep belief networks, a convolutional neural network, and a stacked auto-encoder, and four machine learning models, namely support vector regression, random forests, AdaBoost, and decision tree. The results testify that deep learning models are suitable for predicting patient LOS and highlight the GAN's superior performance over the other models.
Affiliation(s)
- Farid Kadri: Aeroline DATA & CET, Agence 1031, Sopra Steria Group, Colomiers, 31770, France
- Abdelkader Dairi: Laboratoire des Technologies de l'Environnement (LTE), BP 1523, Al M'naouar, 10587 Oran, Algeria; University of Science and Technology of Oran-Mohamed Boudiaf, USTO-MB, BP 1505, El Mnaouar, Bir El Djir, 10587 Oran, Algeria
- Fouzi Harrou: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
- Ying Sun: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia
19
Bottani S, Burgos N, Maire A, Wild A, Ströer S, Dormont D, Colliot O. Automatic quality control of brain T1-weighted magnetic resonance images for a clinical data warehouse. Med Image Anal 2021; 75:102219. [PMID: 34773767 DOI: 10.1016/j.media.2021.102219]
Abstract
Many studies on machine learning (ML) for computer-aided diagnosis have so far been mostly restricted to high-quality research data. Clinical data warehouses, gathering routine examinations from hospitals, offer great promise for the training and validation of ML models in a realistic setting. However, the use of such clinical data warehouses requires quality control (QC) tools. Visual QC by experts is time-consuming and does not scale to large datasets. In this paper, we propose a convolutional neural network (CNN) for the automatic QC of 3D T1-weighted brain MRI for a large heterogeneous clinical data warehouse. To that end, we used the data warehouse of the hospitals of the Greater Paris area (Assistance Publique-Hôpitaux de Paris [AP-HP]). Specifically, the objectives were: 1) to identify images which are not proper T1-weighted brain MRIs; 2) to identify acquisitions for which gadolinium was injected; 3) to rate the overall image quality. We used 5000 images for training and validation and a separate set of 500 images for testing. In order to train/validate the CNN, the data were annotated by two trained raters according to a visual QC protocol that we specifically designed for application in the setting of a data warehouse. For objectives 1 and 2, our approach achieved excellent accuracy (balanced accuracy and F1 score >90%), similar to the human raters. For objective 3, the performance was good but substantially lower than that of the human raters. Nevertheless, the automatic approach accurately identified (balanced accuracy and F1 score >80%) low-quality images, which would typically need to be excluded. Overall, our approach shall be useful for exploiting hospital data warehouses in medical image computing.
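The balanced accuracy and F1 score reported above follow directly from confusion-matrix counts. A small sketch with hypothetical counts (treating "low-quality image" as the positive class; these numbers are illustrative, not the study's results):

```python
def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity and specificity for a binary classifier."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical QC run: 50 truly low-quality images among 500,
# of which the classifier flags 45 correctly and 20 falsely
tp, fp, tn, fn = 45, 20, 430, 5
print(round(balanced_accuracy(tp, fp, tn, fn), 3))  # 0.928
print(round(f1_score(tp, fp, fn), 3))               # 0.783
```

Balanced accuracy is the metric of choice here because usable images vastly outnumber the low-quality ones, so plain accuracy would look high even for a classifier that flags nothing.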
Affiliation(s)
- Simona Bottani: Inria, Aramis project-team, Paris, 75013, France; Sorbonne Université, Paris, 75013, France; Institut du Cerveau - Paris Brain Institute-ICM, Paris, 75013, France; Inserm, Paris, 75013, France; CNRS, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France
- Ninon Burgos: Sorbonne Université, Paris, 75013, France; Institut du Cerveau - Paris Brain Institute-ICM, Paris, 75013, France; Inserm, Paris, 75013, France; CNRS, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France; Inria, Aramis project-team, Paris, 75013, France
- Adam Wild: Sorbonne Université, Paris, 75013, France; Institut du Cerveau - Paris Brain Institute-ICM, Paris, 75013, France; Inserm, Paris, 75013, France; CNRS, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France; Inria, Aramis project-team, Paris, 75013, France
- Sebastian Ströer: AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris, 75013, France
- Didier Dormont: Sorbonne Université, Paris, 75013, France; Institut du Cerveau - Paris Brain Institute-ICM, Paris, 75013, France; Inserm, Paris, 75013, France; CNRS, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France; Inria, Aramis project-team, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris, 75013, France
- Olivier Colliot: Sorbonne Université, Paris, 75013, France; Institut du Cerveau - Paris Brain Institute-ICM, Paris, 75013, France; Inserm, Paris, 75013, France; CNRS, Paris, 75013, France; AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France; Inria, Aramis project-team, Paris, 75013, France