1. Sun Z, Silberstein J, Vaccarezza M. Cardiovascular Computed Tomography in the Diagnosis of Cardiovascular Disease: Beyond Lumen Assessment. J Cardiovasc Dev Dis 2024;11:22. PMID: 38248892; PMCID: PMC10816599; DOI: 10.3390/jcdd11010022.
Abstract
Cardiovascular CT is widely used in the diagnosis of cardiovascular disease owing to rapid advances in CT scanning techniques. These advances include the development of multi-slice CT, from early-generation to the latest models, which can acquire images with high spatial and temporal resolution. The recent emergence of photon-counting CT has further enhanced CT performance in clinical applications, providing improved spatial and contrast resolution. CT-derived fractional flow reserve is superior to standard CT-based anatomical assessment for the detection of lesion-specific myocardial ischemia. CT-derived 3D-printed patient-specific models also surpass standard CT, offering advantages in educational value, surgical planning, and the simulation of cardiovascular disease treatment, as well as enhanced doctor-patient communication. Three-dimensional visualization tools, including virtual reality, augmented reality, and mixed reality, are further advancing the clinical value of cardiovascular CT. With the widespread use of artificial intelligence, machine learning, and deep learning in cardiovascular disease, the diagnostic performance of cardiovascular CT has improved significantly, with promising results in both disease diagnosis and prediction. This review provides an overview of the applications of cardiovascular CT, from its diagnostic value based on traditional lumen assessment to the identification of vulnerable lesions for predicting disease outcomes with these advanced technologies. The limitations and future prospects of these technologies are also discussed.
Affiliation(s)
- Zhonghua Sun: Curtin Medical School, Curtin University, Perth, WA 6102, Australia; Curtin Health Innovation Research Institute (CHIRI), Curtin University, Perth, WA 6102, Australia
- Jenna Silberstein: Curtin Medical School, Curtin University, Perth, WA 6102, Australia
- Mauro Vaccarezza: Curtin Medical School, Curtin University, Perth, WA 6102, Australia; Curtin Health Innovation Research Institute (CHIRI), Curtin University, Perth, WA 6102, Australia
2. Long B, Cremat DL, Serpa E, Qian S, Blebea J. Applying Artificial Intelligence to Predict Complications After Endovascular Aneurysm Repair. Vasc Endovascular Surg 2024;58:65-75. PMID: 37429299; DOI: 10.1177/15385744231189024.
Abstract
Objective: Complications after endovascular aneurysm repair (EVAR) can be fatal. Patient follow-up for surveillance imaging is becoming more challenging, as fewer patients return, particularly after the first year. The aim of this study was to develop an artificial intelligence model that predicts each patient's complication probability, to better identify those needing more intensive post-operative surveillance. Methods: Pre-operative CTA 3D reconstruction images of AAA from 273 patients who underwent EVAR from 2011-2020 were collected. Of these, 48 patients had post-operative complications, including endoleak, AAA rupture, graft limb occlusion, renal artery occlusion, and neck dilation. A deep convolutional neural network model (VascAI©) was developed that uses pre-operative 3D CT images to predict the risk of complications after EVAR. The model was built with TensorFlow and run on the Google Colab platform. An initial training subset of 40 randomly selected patients with complications and 189 without was used to train the model, while the remaining 8 positive and 36 negative cases tested its performance and prediction accuracy. Data down-sampling was used to alleviate class imbalance, and data augmentation was used to further boost model performance. Results: Training was completed on the 229 cases in the training set, and the model was then applied to predict the complication probability of each individual in the held-out test cases. The model achieved a complication sensitivity of 100%, identifying all patients who later developed complications after EVAR. Of the 36 patients without complications, 16 (44%) were falsely predicted to develop complications. The results therefore demonstrate excellent sensitivity for identifying patients who would benefit from more stringent surveillance, while allowing decreased surveillance frequency in the 56% of patients unlikely to develop complications.
Conclusion: AI models can be developed to predict the risk of post-operative complications with high accuracy. Unlike existing methods, the model developed in this study did not require expert-annotated data, only the AAA CTA images as input. It can play an assistive role in identifying patients at high risk of post-EVAR complications who need greater compliance with surveillance.
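The down-sampling step described in the Methods can be illustrated with a minimal balancing routine. This is an illustrative sketch, not the authors' code; the helper name and the 50/50 target ratio are assumptions:

```python
import random

def downsample_majority(X, y, seed=0):
    """Randomly down-sample the majority class until both classes
    are the same size (one common strategy for class imbalance)."""
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    majority, minority = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    rng = random.Random(seed)
    kept = rng.sample(majority, len(minority))
    idx = sorted(kept + minority)
    return [X[i] for i in idx], [y[i] for i in idx]

# Toy data: 6 negatives, 2 positives -> 2 vs 2 after down-sampling
X = [[i] for i in range(8)]
y = [0, 0, 0, 0, 0, 0, 1, 1]
Xb, yb = downsample_majority(X, y)
```

Down-sampling discards majority-class samples, which is why the study pairs it with data augmentation to recover performance.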
Affiliation(s)
- Becky Long: Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Danielle L Cremat: Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Eduardo Serpa: Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- Sinong Qian: Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
- John Blebea: Department of Surgery, College of Medicine, Central Michigan University, Saginaw, MI, USA
3. Auto-MyIn: Automatic diagnosis of myocardial infarction via multiple GLCMs, CNNs, and SVMs. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104273.
4. Attallah O. MonDiaL-CAD: Monkeypox diagnosis via selected hybrid CNNs unified with feature selection and ensemble learning. Digit Health 2023;9:20552076231180054. PMID: 37312961; PMCID: PMC10259124; DOI: 10.1177/20552076231180054.
Abstract
Objective: The monkeypox virus has been evolving, raising fears that it could spread as COVID-19 did. Computer-aided diagnosis (CAD) based on deep learning approaches, especially convolutional neural networks (CNNs), can assist in the rapid assessment of reported incidents. Current CADs are mostly based on an individual CNN; the few that employ multiple CNNs have not investigated which combination of CNNs most affects performance, and they rely only on the spatial information of deep features to train their models. This study aims to construct a CAD tool named "Monkey-CAD" that addresses these limitations and diagnoses monkeypox rapidly and accurately. Methods: Monkey-CAD extracts features from eight CNNs and then examines the best possible combination of deep features that influences classification. It employs the discrete wavelet transform (DWT) to merge features, which diminishes the fused features' size and provides a time-frequency representation. The size of these deep features is then further reduced via an entropy-based feature selection approach, and the reduced fused features, which deliver a better representation of the input, feed three ensemble classifiers. Results: Two freely accessible datasets, Monkeypox Skin Image (MSID) and Monkeypox Skin Lesion (MSLD), are employed in this study. Monkey-CAD discriminated between cases with and without monkeypox, achieving an accuracy of 97.1% on MSID and 98.7% on MSLD. Conclusions: These promising results demonstrate that Monkey-CAD can be employed to assist health practitioners, and they verify that fusing deep features from selected CNNs can boost performance.
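The entropy-based feature selection step might look like the following minimal sketch. The equal-width binning, the bin count, and the keep-the-lowest-entropy rule are all assumptions made for illustration; the abstract does not specify the exact criterion:

```python
import math
from collections import Counter

def shannon_entropy(values, bins=4):
    """Shannon entropy of a feature after equal-width binning."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against constant features
    binned = [min(int((v - lo) / width), bins - 1) for v in values]
    counts = Counter(binned)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_k_features(X, k):
    """Keep the k columns with the lowest entropy (one possible
    entropy-based criterion; the paper's exact rule may differ)."""
    n_feats = len(X[0])
    ent = [(shannon_entropy([row[j] for row in X]), j) for j in range(n_feats)]
    keep = sorted(j for _, j in sorted(ent)[:k])
    return [[row[j] for j in keep] for row in X], keep

# Toy matrix: column 0 is constant (zero entropy), column 1 varies
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Xs, kept = select_k_features(X, 1)
```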
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
5. de Santis RB, Gontijo TS, Costa MA. A Data-Driven Framework for Small Hydroelectric Plant Prognosis Using Tsfresh and Machine Learning Survival Models. Sensors (Basel) 2022;23:12. PMID: 36616612; PMCID: PMC9824278; DOI: 10.3390/s23010012.
Abstract
Maintenance in small hydroelectric plants (SHPs) is essential for securing the expansion of clean energy sources and supplying the energy estimated to be required in the coming years. Identifying failures in SHPs before they happen is crucial for better management of asset maintenance, lower operating costs, and the expansion of renewable energy sources. Most fault prognosis models proposed thus far for hydroelectric generating units are based on signal decomposition and regression models. In the specific case of SHPs, censored data are common, since operation is not consistently steady and can be repeatedly interrupted by transmission problems or scarcity of water resources. To overcome this, we propose a two-step, data-driven framework for SHP prognosis based on time-series feature engineering and survival modeling. We compared two feature-engineering strategies: one using higher-order statistics and the other using the Tsfresh algorithm. We fitted three machine learning survival models (CoxNet, survival random forests, and gradient boosting survival analysis) and estimated the concordance index of each approach. The best model achieved a concordance index of 77.44%. We further investigated and discussed the importance of the monitored sensors and the feature-extraction aggregations. Kurtosis and variance were the most relevant aggregations in the higher-order statistics domain, while the fast Fourier transform and continuous wavelet transform were the most frequent transformations when using Tsfresh. The most important sensors were related to temperature at several points, such as the bearing generator, oil hydraulic unit, and turbine radial bushing.
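The concordance index used to compare the survival models is the fraction of comparable patient pairs (here, machine-run pairs) that the predicted risk ranks correctly. A minimal sketch of Harrell's C-index on toy data, with a simple O(n²) pair loop:

```python
def concordance_index(times, events, risk):
    """Harrell's C-index: among comparable pairs (the earlier observed
    time had an event), count pairs where the higher predicted risk
    goes with the shorter survival; risk ties count 0.5."""
    conc, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comparable
```

A perfectly anti-monotone risk over uncensored failures yields C = 1.0; random risk hovers near 0.5, so the reported 77.44% sits well above chance.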
Affiliation(s)
- Rodrigo Barbosa de Santis: Graduate Program in Industrial Engineering, Universidade Federal de Minas Gerais, Av. Antônio Carlos 6627, Belo Horizonte 31270-901, MG, Brazil
- Tiago Silveira Gontijo: Graduate Program in Industrial Engineering, Universidade Federal de Minas Gerais, Av. Antônio Carlos 6627, Belo Horizonte 31270-901, MG, Brazil
- Marcelo Azevedo Costa: Graduate Program in Industrial Engineering and Department of Industrial Engineering, Universidade Federal de Minas Gerais, Av. Antônio Carlos 6627, Belo Horizonte 31270-901, MG, Brazil
6. Attallah O, Aslan MF, Sabanci K. A Framework for Lung and Colon Cancer Diagnosis via Lightweight Deep Learning Models and Transformation Methods. Diagnostics (Basel) 2022;12:2926. PMID: 36552933; PMCID: PMC9776637; DOI: 10.3390/diagnostics12122926.
Abstract
Lung and colon cancers are among the leading causes of mortality and morbidity. They may develop concurrently and negatively impact human life, and if not diagnosed early, each is likely to spread to the other organ. Histopathological detection of such malignancies is one of the most crucial components of effective treatment. Although the process is lengthy and complex, deep learning (DL) techniques have made it feasible to complete more quickly and accurately, enabling researchers to study many more patients in a short period and at far lower cost. Earlier studies relied on DL models that require substantial computational ability and resources, and most depended on individual DL models to extract high-dimensional features or to perform diagnoses. In this study, by contrast, a framework based on multiple lightweight DL models is proposed for the early detection of lung and colon cancers. The framework utilizes several transformation methods that perform feature reduction and provide a better representation of the data. Histopathology scans are fed into the ShuffleNet, MobileNet, and SqueezeNet models, and the number of deep features acquired from these models is reduced using principal component analysis (PCA) and the fast Walsh-Hadamard transform (FWHT). The discrete wavelet transform (DWT) is then used to fuse the FWHT-reduced features obtained from the three DL models, and the three models' PCA features are concatenated. Finally, the diminished features resulting from the PCA and FWHT-DWT reduction and fusion processes are fed to four distinct machine learning algorithms, reaching a highest accuracy of 99.6%.
The results obtained with the proposed lightweight-DL framework show that it can distinguish lung and colon cancer variants with fewer features and less computational complexity than existing methods. They also indicate that utilizing transformation methods to reduce features can offer a superior representation of the data, thus improving the diagnosis procedure.
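The fast Walsh-Hadamard transform used for feature reduction can be sketched in pure Python as an unnormalized in-place butterfly; in a framework like the one above, one would keep only low-order coefficients of each deep-feature vector to shrink its dimension (that truncation step is an assumption, not shown):

```python
def fwht(a):
    """Fast Walsh-Hadamard transform (input length must be a power
    of two); unnormalized butterfly, O(n log n)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

A constant vector concentrates all its energy in the first coefficient, which is why truncating the tail of the transform compresses smooth feature vectors well.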
Affiliation(s)
- Omneya Attallah (correspondence): Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Muhammet Fatih Aslan: Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
- Kadir Sabanci: Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
7. Attallah O, Samir A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl Soft Comput 2022;128:109401. PMID: 35919069; PMCID: PMC9335861; DOI: 10.1016/j.asoc.2022.109401.
Abstract
Quick diagnosis of the novel coronavirus (COVID-19) disease is vital to prevent its propagation and improve therapeutic outcomes. Computed tomography (CT) is believed to be an effective tool for diagnosing COVID-19; however, a CT scan contains hundreds of slices that are complex to analyze, which can delay diagnosis. Artificial intelligence (AI), especially deep learning (DL), can facilitate and speed up COVID-19 diagnosis from such scans. Several studies have employed DL approaches based on 2D CT images from a single view; nevertheless, 3D multiview CT slices have demonstrated an excellent ability to enhance the efficiency of COVID-19 diagnosis. The majority of DL-based studies utilized the spatial information of the original CT images to train their models, though using spectral-temporal information could improve detection. This article proposes a DL-based pipeline called CoviWavNet for the automatic diagnosis of COVID-19, using a 3D multiview dataset called OMNIAHCOV. Initially, CoviWavNet analyzes the CT slices using multilevel discrete wavelet decomposition (DWT) and then uses the heatmaps of the approximation levels to train three ResNet CNN models, which use the spectral-temporal information of these images to perform classification. Subsequently, it investigates whether combining spatial information with spectral-temporal information could improve diagnostic accuracy. For this purpose, it extracts deep spectral-temporal features from these ResNets using transfer learning and integrates them with deep spatial features extracted from the same ResNets trained on the original CT slices. It then applies a feature selection step to reduce the dimension of the integrated features and uses them as inputs to three support vector machine (SVM) classifiers. To further validate performance, a publicly available benchmark dataset called SARS-COV-2-CT-Scan is employed.
The results demonstrate that using the spectral-temporal information of the DWT heatmap images to train the ResNets is superior to utilizing the spatial information of the original CT images. Furthermore, integrating deep spectral-temporal features with deep spatial features enhanced the classification accuracy of the three SVM classifiers, reaching final accuracies of 99.33% and 99.7% for the OMNIAHCOV and SARS-COV-2-CT-Scan datasets, respectively. These accuracies verify the outstanding performance of CoviWavNet compared with related studies. Thus, CoviWavNet can help radiologists reach a rapid and accurate diagnosis of COVID-19.
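The multilevel wavelet-decomposition step can be illustrated with a minimal 1-D Haar sketch that keeps only the approximation (low-pass) coefficients at each level. This is a toy, assuming unnormalized pairwise averaging; the pipeline above applies a 2-D DWT to CT slices and renders the approximation as heatmaps, which this sketch omits:

```python
def haar_approx(signal, levels=1):
    """Keep only the DWT approximation coefficients: each level
    averages adjacent pairs (unnormalized Haar low-pass filter),
    halving the length per level."""
    a = list(signal)
    for _ in range(levels):
        a = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a) - 1, 2)]
    return a
```

For a 2-D image the same filter would be applied along rows and then columns, so each level quarters the pixel count while preserving coarse structure.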
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Ahmed Samir: Department of Radiodiagnosis, Faculty of Medicine, University of Alexandria, Egypt
8. Caradu C, Pouncey AL, Lakhlifi E, Brunet C, Bérard X, Ducasse E. Fully automatic volume segmentation using deep learning approaches to assess aneurysmal sac evolution after infra-renal endovascular aortic repair. J Vasc Surg 2022;76:620-630.e3. PMID: 35618195; DOI: 10.1016/j.jvs.2022.03.891.
Abstract
OBJECTIVE: Endovascular aortic repair (EVAR) surveillance relies on serial measurements of maximal diameter despite significant inter- and intra-observer variability. Volumetric measurements are more sensitive, but general use is hampered by the time required for their implementation. An innovative fully automated software (PRAEVAorta® from Nurea), using artificial intelligence (AI), previously demonstrated fast and robust detection of infra-renal abdominal aortic aneurysm (AAA) characteristics on pre-operative imaging. This study aimed to assess the robustness of these data on post-EVAR computed tomography (CT) scans. METHODS: Fully automatic segmentation was compared with semi-automatic segmentation manually corrected by a senior surgeon on a dataset of 48 patients (48 early post-EVAR CT scans with 6466 slices, and a total of 101 follow-up CT scans with 13708 slices). RESULTS: The analyses confirmed an excellent correlation of post-EVAR volumes and surfaces, as well as proximal neck and maximum aneurysm diameters, between the fully automatic and manually corrected segmentation methods (Pearson's correlation coefficient >.99, p<.0001). Comparison between the two methods revealed a mean Dice similarity coefficient of 0.950 ± 0.015, Jaccard index of 0.906 ± 0.028, sensitivity of 0.929 ± 0.028, specificity of 0.965 ± 0.016, volumetric similarity (VS) of 0.973 ± 0.018, and mean Hausdorff distance per slice of 8.7 ± 10.8 mm. The mean VS reached 0.873 ± 0.100 for the lumen and 0.903 ± 0.091 for the thrombus. Segmentation was 9 times faster with the fully automatic method (2.5 vs 22 min/patient with the manually corrected method; p<.0001). Preliminary analysis also demonstrated that a diameter increase of 2 mm can represent a >5% volume increase. CONCLUSION: PRAEVAorta® enables fast, reproducible, and fully automated analysis of post-EVAR AAA sac and neck characteristics, with comparison between different time points.
It could become a crucial adjunct for EVAR follow-up through early detection of sac evolution, which may reduce the risk of secondary rupture.
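The overlap metrics reported above (Dice, Jaccard, volumetric similarity) are all simple functions of true positives, false positives, and false negatives between two binary masks. A minimal sketch on flat 0/1 lists (toy masks, not the study's data):

```python
def overlap_metrics(pred, truth):
    """Dice, Jaccard, and volumetric similarity for two binary masks
    given as flat 0/1 lists (slice or volume order does not matter)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    # VS compares total volumes only, ignoring where the overlap is
    vs = 1 - abs(fp - fn) / (2 * tp + fp + fn)
    return dice, jaccard, vs
```

Note that VS can be perfect (1.0) even when the masks only partially overlap, as long as the two volumes are equal; this is why the study reports it alongside Dice rather than instead of it.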
Affiliation(s)
- Caroline Caradu: Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Emilie Lakhlifi: Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Céline Brunet: Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Xavier Bérard: Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
- Eric Ducasse: Bordeaux University Hospital, Department of Vascular Surgery, 33000 Bordeaux, France
9. Attallah O. An Intelligent ECG-Based Tool for Diagnosing COVID-19 via Ensemble Deep Learning Techniques. Biosensors (Basel) 2022;12:299. PMID: 35624600; PMCID: PMC9138764; DOI: 10.3390/bios12050299.
Abstract
Diagnosing COVID-19 accurately and rapidly is vital to control its spread, ease lockdown restrictions, and decrease the workload on healthcare systems. Present tools for detecting COVID-19 have numerous shortcomings, so novel diagnostic tools should be examined to enhance accuracy and avoid those limitations. Earlier studies reported various cardiovascular alterations in COVID-19 cases, motivating the use of ECG data for diagnosing the novel coronavirus. This study introduces a novel automated ECG-based diagnostic tool for COVID-19. The tool utilizes ten deep learning (DL) models of various architectures, obtains significant features from the last fully connected layer of each DL model, and then combines them. It then applies a hybrid feature selection based on the chi-square test and sequential search to select significant features. Finally, it employs several machine learning classifiers at two classification levels: a binary level to differentiate between normal and COVID-19 cases, and a multiclass level to discriminate COVID-19 cases from normal cases and other cardiac complications. The proposed tool reached accuracies of 98.2% and 91.6% for the binary and multiclass levels, respectively. This performance indicates that the ECG could serve as an alternative means of diagnosing COVID-19.
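The chi-square scoring that drives the hybrid feature selection can be sketched from first principles: each (discretized) feature is cross-tabulated against the class labels and scored by the usual observed-vs-expected statistic. This is an illustrative pure-Python version (continuous features would first need binning, which is an assumption here):

```python
def chi2_score(feature, labels):
    """Chi-square statistic of a discrete feature vs. class labels,
    computed from the observed/expected contingency table. Higher
    scores mean stronger feature-label dependence."""
    rows, cols = sorted(set(feature)), sorted(set(labels))
    n = len(labels)
    obs = {(r, c): 0 for r in rows for c in cols}
    for f, y in zip(feature, labels):
        obs[(f, y)] += 1
    row_tot = {r: sum(obs[(r, c)] for c in cols) for r in rows}
    col_tot = {c: sum(obs[(r, c)] for r in rows) for c in cols}
    stat = 0.0
    for r in rows:
        for c in cols:
            exp = row_tot[r] * col_tot[c] / n  # expected under independence
            stat += (obs[(r, c)] - exp) ** 2 / exp
    return stat
```

Ranking features by this score and then running a sequential search over the top-ranked ones is one plausible reading of the "chi-square plus sequential search" hybrid described in the abstract.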
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
10. Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022;142:105210. PMID: 35026574; PMCID: PMC8730786; DOI: 10.1016/j.compbiomed.2022.105210.
Abstract
Accurate and speedy detection of COVID-19 is essential to avert fast propagation of the virus, alleviate lockdown constraints, and diminish the burden on health organizations. The methods currently used to diagnose COVID-19 have several limitations, so new techniques need to be investigated to improve diagnosis. Given the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design and extracts two levels of features from two different layers of each model. Features mined from higher layers are fused using the discrete wavelet transform and then integrated with lower-layer features. A feature selection approach is then utilized, and finally an ensemble classification system is built to merge the predictions of three machine learning classifiers. ECG-BiCoNet performs two classification tasks, binary and multiclass, achieving promising accuracies of 98.8% and 91.73%, respectively. These results verify that ECG data may be used to diagnose COVID-19, which can help clinicians automate diagnosis and overcome the limitations of manual diagnosis.
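Merging the predictions of three classifiers can be done by majority vote; this is one common ensemble rule, sketched here as an assumption since the abstract does not name the exact merging scheme (the tie-break in favor of the first classifier is also mine):

```python
from collections import Counter

def majority_vote(*prediction_lists):
    """Merge per-sample predictions from several classifiers by
    majority vote; ties are resolved by the earliest classifier's
    vote among the tied classes."""
    merged = []
    for votes in zip(*prediction_lists):
        counts = Counter(votes)
        top = max(counts.values())
        winners = [v for v in votes if counts[v] == top]
        merged.append(winners[0])
    return merged
```

With an odd number of classifiers on a binary task, ties cannot occur, which is one practical reason ensembles of three are popular.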
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
11. Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022;8:20552076221092543. PMID: 35433024; PMCID: PMC9005822; DOI: 10.1177/20552076221092543.
Abstract
Accurate and rapid detection of the novel coronavirus infection is very important to prevent its fast spread and thereby reduce the negative effects felt across many sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, can help in the fast and precise diagnosis of coronavirus from computed tomography (CT) images. Most artificial intelligence-based studies used the original CT images to build their models; however, integrating texture-based radiomics images with deep learning techniques could improve diagnostic accuracy. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three residual network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform (DWT) and the gray-level co-occurrence matrix (GLCM), instead of the original CT images. It then fuses the texture-based radiomics deep feature sets extracted from each using the discrete cosine transform, and further combines the fused features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for classification. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 CT image dataset. The accuracies attained indicate that using texture-based radiomics (GLCM, DWT) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original CT images (70.34%, 76.51%, and 73.42%, respectively).
Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining texture-based radiomics deep features from the three ResNets boosts performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. This performance allows the proposed framework to be used by radiologists for fast and accurate diagnosis.
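The GLCM texture representation underlying these radiomics images counts how often one gray level is followed by another at a fixed pixel offset. A minimal sketch for a single offset on a tiny quantized image (in radiomics practice, several offsets and angles would be accumulated and normalized):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one offset: entry [i][j]
    counts how often gray level i is followed by gray level j at
    displacement (dx, dy). Image is a 2-D list of ints < levels."""
    h, w = len(image), len(image[0])
    mat = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[image[y][x]][image[ny][nx]] += 1
    return mat
```

Texture statistics such as contrast, energy, and homogeneity are then derived from the normalized matrix, or, as in this framework, the matrix itself is rendered as an image and fed to the CNNs.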
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
12. Attallah O. A deep learning-based diagnostic tool for identifying various diseases via facial images. Digit Health 2022;8:20552076221124432. PMID: 36105626; PMCID: PMC9465585; DOI: 10.1177/20552076221124432.
Abstract
With the current health crisis caused by the COVID-19 pandemic, patients have
become more anxious about infection, so they prefer not to have direct contact
with doctors or clinicians. Lately, medical scientists have confirmed that
several diseases exhibit corresponding specific features on the face the face.
Recent studies have indicated that computer-aided facial diagnosis can be a
promising tool for the automatic diagnosis and screening of diseases from facial
images. However, few of these studies used deep learning (DL) techniques. Most
of them focused on detecting a single disease, using handcrafted feature
extraction methods and conventional machine learning techniques based on
individual classifiers trained on small and private datasets using images taken
from a controlled environment. This study proposes a novel computer-aided facial
diagnosis system called FaceDisNet that uses a new public dataset based on
images taken from an unconstrained environment and could be employed for
forthcoming comparisons. It detects single and multiple diseases. FaceDisNet is
constructed by integrating several spatial deep features from convolutional
neural networks of various architectures. It does not depend only on spatial
features but also extracts spatial-spectral features. FaceDisNet searches for
the fused spatial-spectral feature set that has the greatest impact on the
classification. It employs two feature selection techniques to reduce the large
dimension of features resulting from feature fusion. Finally, it builds an
ensemble classifier based on stacking to perform classification. The performance
of FaceDisNet verifies its ability to diagnose single and multiple diseases.
FaceDisNet achieved maximum accuracies of 98.57% and 98% after the ensemble
classification and feature selection steps for the binary and multiclass
classification categories, respectively. These results indicate that FaceDisNet is a reliable
tool and could be employed to avoid the difficulties and complications of manual
diagnosis. Also, it can help physicians achieve accurate diagnoses without the
need for physical contact with the patients.
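The multi-CNN fusion-and-selection pipeline described above can be sketched in a few lines. Here the CNN backbones are stubbed with fixed random projections, and all names and dimensions (`cnn_features`, the top-k variance filter) are our illustrative assumptions, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(images, out_dim, seed):
    """Stub spatial feature extractor: a fixed random projection stands in for a pretrained CNN."""
    proj = np.random.default_rng(seed).normal(size=(images.shape[1], out_dim))
    return images @ proj

images = rng.normal(size=(8, 64))          # 8 flattened face images (toy data)

# Fuse spatial features from three "backbones" by concatenation
fused = np.concatenate([cnn_features(images, 16, s) for s in (1, 2, 3)], axis=1)

# Spatial-spectral step: a frequency transform along the feature axis
spectral = np.fft.rfft(fused, axis=1).real

# Simple filter-style feature selection: keep the k highest-variance features
k = 10
keep = np.argsort(spectral.var(axis=0))[-k:]
selected = spectral[:, keep]
print(selected.shape)  # (8, 10)
```

The actual system adds a stacking ensemble on top of the selected features; this sketch stops at the reduced feature set.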
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
13
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380 PMCID: PMC8620568 DOI: 10.3390/diagnostics11112034] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 09/24/2021] [Accepted: 11/01/2021] [Indexed: 12/12/2022] Open
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques, called DIAROP, to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Networks (CNNs) using transfer learning and then applying the Fast Walsh Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best-integrated features extracted from the CNNs that influence its diagnostic capability. DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, its performance is compared with that of recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
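The Fast Walsh-Hadamard Transform used for feature integration is easy to state concretely. Below is a textbook iterative FWHT applied to two toy fused feature vectors; this is a sketch of the idea, not DIAROP's code:

```python
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform; len(a) must be a power of 2."""
    a = np.asarray(a, dtype=float).copy()
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly step
        h *= 2
    return a

# Toy "deep features" from two backbones, concatenated then transformed
f1 = np.array([1.0, 0.0, 1.0, 0.0])
f2 = np.array([0.0, 2.0, 0.0, 2.0])
fused = np.concatenate([f1, f2])               # length 8, a power of two
print(fwht(fused))  # [ 6. -2.  0.  0. -2.  6.  0.  0.]
```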
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
14
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. CONTRAST MEDIA & MOLECULAR IMAGING 2021; 2021:7192016. [PMID: 34621146 PMCID: PMC8457955 DOI: 10.1155/2021/7192016] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 08/20/2021] [Accepted: 09/01/2021] [Indexed: 02/06/2023]
Abstract
The rates of skin cancer (SC) are rising every year, and SC is becoming a critical health issue worldwide. Early and accurate diagnosis of SC is the key to reducing these rates and improving survivability. However, manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need for automated dermatologist tools that are capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques, including machine learning (ML) and deep learning (DL), have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools rely on either high-level features obtained with DL methods or low-level features obtained with handcrafted operations, and most of them were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. This tool incorporates manifold radiomics feature categories, involving high-level features extracted with ResNet-50, DenseNet-201, and DarkNet-53 and low-level features including the discrete wavelet transform (DWT) and the local binary pattern (LBP). The results of the proposed intelligent tool show that merging manifold features of different categories has a high influence on the classification accuracy, and these results are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed intelligent tool can be used by dermatologists to help them accurately diagnose the SC subcategory. It can also overcome the limitations of manual diagnosis, reduce the rates of infection, and enhance survival rates.
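As an illustration of the low-level feature side, a one-level 2-D Haar wavelet transform with per-sub-band energies is sketched below. The toy 4x4 patch and energy feature are our own assumptions; the paper's exact DWT settings are not reproduced here:

```python
import numpy as np

def haar_dwt_1level(img):
    """One-level 2-D Haar wavelet transform returning the LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4   # approximation
    LH = (a - b + c - d) / 4   # horizontal detail
    HL = (a + b - c - d) / 4   # vertical detail
    HH = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "lesion" patch
LL, LH, HL, HH = haar_dwt_1level(img)

# Low-level feature vector: mean energy of each sub-band
features = np.array([(s ** 2).mean() for s in (LL, LH, HL, HH)])
print(features)  # [73.25  0.25  4.    0.  ]
```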
15
Attallah O. CoMB-Deep: Composite Deep Learning-Based Pipeline for Classifying Childhood Medulloblastoma and Its Classes. Front Neuroinform 2021; 15:663592. [PMID: 34122031 PMCID: PMC8193683 DOI: 10.3389/fninf.2021.663592] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 04/26/2021] [Indexed: 12/28/2022] Open
Abstract
Childhood medulloblastoma (MB) is a threatening malignant tumor affecting children all over the globe. It is believed to be the most common pediatric brain tumor causing death. Early and accurate classification of childhood MB and its classes is of great importance to help doctors choose the suitable treatment and observation plan, avoid tumor progression, and lower death rates. The current gold standard for diagnosing MB is the histopathology of biopsy samples. However, manual analysis of such images is complicated, costly, time-consuming, and highly dependent on the expertise and skills of pathologists, which might cause inaccurate results. This study aims to introduce a reliable computer-assisted pipeline called CoMB-Deep to automatically classify MB and its classes with high accuracy from histopathological images. The key challenge of the study is the lack of childhood MB datasets, especially for its four categories (defined by the WHO), and the scarcity of related studies. All relevant works were based on either deep learning (DL) or textural analysis feature extraction, employed distinct features to accomplish the classification procedure, and mostly extracted only spatial features. CoMB-Deep, by contrast, blends the advantages of textural analysis feature extraction techniques and DL approaches. CoMB-Deep consists of a composite of DL techniques. Initially, it extracts deep spatial features from 10 convolutional neural networks (CNNs). It then performs a feature fusion step using the discrete wavelet transform (DWT), a texture analysis method capable of reducing the dimension of the fused features. Next, CoMB-Deep explores the best combination of fused features, enhancing the performance of the classification process using two search strategies. Afterward, it employs two feature selection techniques on the fused feature sets selected in the previous step.
Finally, a bi-directional long short-term memory (Bi-LSTM) network, a DL-based approach, is utilized for the classification phase. CoMB-Deep supports two classification categories: a binary category for distinguishing between abnormal and normal cases, and a multi-class category for identifying the subclasses of MB. The results of CoMB-Deep for both classification categories prove that it is reliable. The results also indicate that the feature sets selected using both search strategies enhanced the performance of the Bi-LSTM compared to individual spatial deep features. CoMB-Deep is compared with related studies to verify its competitiveness, and this comparison confirmed its robustness and superior performance. Hence, CoMB-Deep can help pathologists perform accurate diagnoses, reduce the misdiagnosis risks of manual diagnosis, accelerate the classification procedure, and decrease diagnosis costs.
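One of the two search strategies over fused-feature combinations can be sketched as a greedy forward search. The toy score function below is our own stand-in for the study's Bi-LSTM-based evaluation, and all names are illustrative:

```python
def forward_search(candidates, score_fn):
    """Greedy forward search: repeatedly add the candidate feature set that most
    improves the score; stop when no addition improves it."""
    chosen, best = [], float("-inf")
    remaining = list(candidates)
    while remaining:
        scored = [(score_fn(chosen + [c]), c) for c in remaining]
        s, f = max(scored, key=lambda t: t[0])
        if s <= best:
            break
        best, chosen = s, chosen + [f]
        remaining.remove(f)
    return chosen, best

# Toy surrogate score: reward sets overlapping a "useful" target, penalize the rest
target = {'a', 'c'}
def score_fn(sets):
    s = set(sets)
    return len(s & target) - len(s - target)

chosen, best = forward_search(['a', 'b', 'c'], score_fn)
print(chosen, best)  # ['a', 'c'] 2
```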
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
16
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most of the related work based on DL approaches extracted spatial features only. In the second stage of Gastro-CADx, however, the features extracted in the first stage are applied to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features. Additionally, a feature reduction procedure is performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output results of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved accuracies of 97.3% and 99.7% for Datasets I and II, respectively. The results were compared with recent related works; the comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy than other works. Thus, it can be used to reduce medical complications and death rates, in addition to the cost of treatment. It can also help gastroenterologists produce more accurate diagnoses while lowering inspection time.
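The DCT side of the second stage can be illustrated with a naive type-II DCT followed by coefficient truncation as the feature-reduction step. This is a sketch under toy assumptions, not the Gastro-CADx implementation:

```python
import numpy as np

def dct2(x):
    """Type-II DCT of a 1-D signal (naive O(n^2) form, enough for a sketch)."""
    n = len(x)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    return (x[None, :] * np.cos(np.pi * k * (2 * i + 1) / (2 * n))).sum(axis=1)

deep_features = np.ones(4)     # stand-in for one CNN feature vector
coeffs = dct2(deep_features)   # frequency-domain representation
reduced = coeffs[:2]           # keep only the leading coefficients as the reduced set
print(coeffs)                  # energy compacts into the first coefficient
```

For a constant input all energy lands in the DC coefficient, which is why truncating the tail is a cheap dimensionality reduction.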
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
17
Attallah O. MB-AI-His: Histopathological Diagnosis of Pediatric Medulloblastoma and its Subtypes via AI. Diagnostics (Basel) 2021; 11:359. [PMID: 33672752 PMCID: PMC7924641 DOI: 10.3390/diagnostics11020359] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Revised: 02/11/2021] [Accepted: 02/11/2021] [Indexed: 12/17/2022] Open
Abstract
Medulloblastoma (MB) is a dangerous malignant pediatric brain tumor that can lead to death. It is considered the most common cancerous pediatric brain tumor. Precise and timely diagnosis of pediatric MB and its four subtypes (defined by the World Health Organization (WHO)) is essential to decide the appropriate follow-up plan and suitable treatments to prevent its progression and reduce mortality rates. Histopathology is the gold-standard modality for the diagnosis of MB and its subtypes, but manual diagnosis by a pathologist is very complicated, needs excessive time, and depends on the pathologist's expertise and skills, which may lead to variability in the diagnosis or misdiagnosis. The main purpose of this paper is to propose a time-efficient and reliable computer-aided diagnosis (CADx) system, namely MB-AI-His, for the automatic diagnosis of pediatric MB and its subtypes from histopathological images. The main challenge in this work is the lack of datasets available for the diagnosis of pediatric MB and its four subtypes, and the limited related work. Related studies are based on either textural analysis or deep learning (DL) feature extraction methods, and they used individual features to perform the classification task. MB-AI-His, however, combines the benefits of DL techniques and textural analysis feature extraction methods in a cascaded manner. First, it uses three DL convolutional neural networks (CNNs), namely DenseNet-201, MobileNet, and ResNet-50, to extract spatial DL features. Next, it extracts time-frequency features from the spatial DL features based on the discrete wavelet transform (DWT), a textural analysis method. Finally, MB-AI-His fuses the three spatial-time-frequency features generated from the three CNNs and the DWT using the discrete cosine transform (DCT) and principal component analysis (PCA) to produce a time-efficient CADx system. MB-AI-His thus merges the advantages of different CNN architectures.
MB-AI-His has a binary classification level for distinguishing between normal and abnormal MB images, and a multi-classification level for identifying the four subtypes of MB. The results of MB-AI-His show that it is accurate and reliable at both the binary and multi-class classification levels. It is also time-efficient, as both the PCA and DCT methods reduced the training execution time. The performance of MB-AI-His is compared with that of related CADx systems, and the comparison verified its strength and superior results. Therefore, it can support pathologists in the accurate and reliable diagnosis of MB and its subtypes from histopathological images. It can also reduce the time and cost of the diagnosis procedure, which will correspondingly lead to lower death rates.
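The PCA reduction step mentioned above can be sketched with a standard SVD-based projection. The data shape and component count below are toy assumptions of ours, not the system's settings:

```python
import numpy as np

def pca_reduce(X, n_components):
    """PCA via SVD: project mean-centered data onto the top principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt = principal axes
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))    # toy fused feature matrix: 20 samples, 6 features
Z = pca_reduce(X, 2)
print(Z.shape)  # (20, 2)
```

Because SVD orders singular values in decreasing order, the first retained component always carries at least as much variance as the second.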
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
18
Attallah O, Anwar F, Ghanem NM, Ismail MA. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput Sci 2021; 7:e493. [PMID: 33987459 PMCID: PMC8093954 DOI: 10.7717/peerj-cs.493] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/26/2021] [Indexed: 05/06/2023]
Abstract
Breast cancer (BC) is one of the most common types of cancer that affects females worldwide. It may lead to irreversible complications and even death due to late diagnosis and treatment. The pathological analysis is considered the gold standard for BC detection, but it is a challenging task. Automatic diagnosis of BC could reduce death rates, by creating a computer aided diagnosis (CADx) system capable of accurately identifying BC at an early stage and decreasing the time consumed by pathologists during examinations. This paper proposes a novel CADx system named Histo-CADx for the automatic diagnosis of BC. Most related studies were based on individual deep learning methods. Also, studies did not examine the influence of fusing features from multiple CNNs and handcrafted features. In addition, related studies did not investigate the best combination of fused features that influence the performance of the CADx. Therefore, Histo-CADx is based on two stages of fusion. The first fusion stage involves the investigation of the impact of fusing several deep learning (DL) techniques with handcrafted feature extraction methods using the auto-encoder DL method. This stage also examines and searches for a suitable set of fused features that could improve the performance of Histo-CADx. The second fusion stage constructs a multiple classifier system (MCS) for fusing outputs from three classifiers, to further improve the accuracy of the proposed Histo-CADx. The performance of Histo-CADx is evaluated using two public datasets; specifically, the BreakHis and the ICIAR 2018 datasets. The results from the analysis of both datasets verified that the two fusion stages of Histo-CADx successfully improved the accuracy of the CADx compared to CADx constructed with individual features. Furthermore, using the auto-encoder for the fusion process has reduced the computation cost of the system. 
Moreover, the results after the two fusion stages confirmed that Histo-CADx is reliable and capable of classifying BC more accurately than other recent studies. Consequently, it can be used by pathologists to help them reach an accurate diagnosis of BC, and it can decrease the time and effort needed by medical experts during the examination.
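A majority-vote fusion of classifier outputs, in the spirit of the second fusion stage's multiple classifier system, can be sketched as follows. The stacking meta-classifier itself is not reproduced, and the labels are toy data:

```python
import numpy as np

def mcs_vote(predictions):
    """Fuse the class labels from several classifiers by per-sample majority vote."""
    preds = np.asarray(predictions)     # shape: (n_classifiers, n_samples)
    return np.array([np.bincount(col).argmax() for col in preds.T])

# Labels from three hypothetical base classifiers on three samples
votes = mcs_vote([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 0]])
print(votes)  # [0 1 0]
```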
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Alexandria, Egypt
- Fatma Anwar
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Nagia M. Ghanem
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Mohamed A. Ismail
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
19
Ragab DA, Attallah O. FUSI-CAD: Coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features. PeerJ Comput Sci 2020; 6:e306. [PMID: 33816957 PMCID: PMC7924442 DOI: 10.7717/peerj-cs.306] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 09/30/2020] [Indexed: 05/19/2023]
Abstract
The precise and rapid diagnosis of coronavirus (COVID-19) at a very early stage helps doctors to manage patients in high-workload conditions. In addition, it prevents the spread of this pandemic virus. Computer-aided diagnosis (CAD) based on artificial intelligence (AI) techniques can be used to distinguish between COVID-19 and non-COVID-19 cases from computed tomography (CT) imaging. Furthermore, CAD systems are capable of delivering an accurate and faster COVID-19 diagnosis, which saves time for disease control and provides an efficient diagnosis compared to laboratory tests. In this study, a novel CAD system called FUSI-CAD based on AI techniques is proposed. Almost all the methods in the literature are based on individual convolutional neural networks (CNNs). In contrast, the FUSI-CAD system is based on the fusion of multiple CNN architectures with three types of handcrafted features, including statistical features and textural analysis features such as the discrete wavelet transform (DWT) and the grey-level co-occurrence matrix (GLCM), which were not previously utilized in coronavirus diagnosis. The SARS-CoV-2 CT-scan dataset is used to test the performance of the proposed FUSI-CAD. The results show that the proposed system could accurately differentiate between COVID-19 and non-COVID-19 images, as the accuracy achieved is 99%. The system also proved reliable, as the sensitivity, specificity, and precision each reached 99%, and the diagnostic odds ratio (DOR) is ≥ 100. Furthermore, the results are compared with recent related studies based on the same dataset, and the comparison verifies the competence of the proposed FUSI-CAD over the other related CAD systems. Thus, the novel FUSI-CAD system can be employed in real diagnostic scenarios for achieving accurate COVID-19 testing and avoiding the human misdiagnosis that might arise from human fatigue. It can also reduce the time and effort expended by radiologists during the examination process.
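The grey-level co-occurrence matrix mentioned above is simple to compute for a single pixel offset. Below is a minimal sketch on a toy image, followed by the classic contrast feature; this is our illustration, not FUSI-CAD's feature set:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset (dx, dy); minimal sketch."""
    g = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1   # count the grey-level pair
    return g

img = np.array([[0, 0, 1],
                [1, 2, 2]])                  # toy 3-level image
g = glcm(img, levels=3)

# Classic GLCM texture feature: contrast = sum_ij p(i,j) * (i-j)^2
i, j = np.indices(g.shape)
contrast = (g * (i - j) ** 2).sum() / g.sum()
print(contrast)  # 0.5
```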
Affiliation(s)
- Dina A. Ragab
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
20
Raffort J, Adam C, Carrier M, Ballaith A, Coscas R, Jean-Baptiste E, Hassen-Khodja R, Chakfé N, Lareyre F. Artificial intelligence in abdominal aortic aneurysm. J Vasc Surg 2020; 72:321-333.e1. [PMID: 32093909 DOI: 10.1016/j.jvs.2019.12.026] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Accepted: 12/07/2019] [Indexed: 12/11/2022]
Abstract
OBJECTIVE Abdominal aortic aneurysm (AAA) is a life-threatening disease, and the only curative treatment relies on open or endovascular repair. The decision to treat relies on the evaluation of the risk of AAA growth and rupture, which can be difficult to assess in practice. Artificial intelligence (AI) has revealed new insights into the management of cardiovascular diseases, but its application in AAA has so far been poorly described. The aim of this review was to summarize the current knowledge on the potential applications of AI in patients with AAA. METHODS A comprehensive literature review was performed. The MEDLINE database was searched according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy used a combination of keywords and included studies using AI in patients with AAA published between January 2000 and May 2019. Two authors independently screened titles and abstracts and performed data extraction. The search of published literature identified 34 studies with distinct methodologies, aims, and study designs. RESULTS AI was used in patients with AAA to improve image segmentation and for quantitative analysis and characterization of AAA morphology, geometry, and fluid dynamics. AI allowed computation of large data sets to identify patterns that may be predictive of AAA growth and rupture. Several predictive and prognostic programs were also developed to assess patients' postoperative outcomes, including mortality and complications after endovascular aneurysm repair. CONCLUSIONS AI represents a useful tool in the interpretation and analysis of AAA imaging by enabling automatic quantitative measurements and morphologic characterization. It could be used to help surgeons in preoperative planning. AI-driven data management may lead to the development of computational programs for the prediction of AAA evolution and risk of rupture as well as postoperative outcomes.
AI could also be used to better evaluate the indications and types of surgical treatment and to plan the postoperative follow-up. AI represents an attractive tool for decision-making and may facilitate development of personalized therapeutic approaches for patients with AAA.
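Segmentation quality in studies like those reviewed here is conventionally scored with the Dice similarity coefficient. A minimal sketch on toy masks follows (our illustration, not taken from the review):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|), the usual overlap score
    for comparing a predicted segmentation mask against ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

pred = np.zeros((4, 4)); pred[:2, :] = 1     # toy predicted aneurysm mask
truth = np.zeros((4, 4)); truth[1:3, :] = 1  # toy ground-truth mask
print(dice(pred, truth))  # 0.5
```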
Affiliation(s)
- Juliette Raffort
- Clinical Chemistry Laboratory, University Hospital of Nice, Nice, France; Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France
- Cédric Adam
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, Paris, France
- Marion Carrier
- Laboratory of Applied Mathematics and Computer Science (MICS), CentraleSupélec, Université Paris-Saclay, Paris, France
- Ali Ballaith
- Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Raphael Coscas
- Department of Vascular Surgery, Ambroise Paré University Hospital, Assistance Publique-Hôpitaux de Paris (AP-HP), Boulogne, France; Inserm U1018 Team 5, Versailles-Saint-Quentin et Paris-Saclay Universities, Versailles, France
- Elixène Jean-Baptiste
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Réda Hassen-Khodja
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France
- Nabil Chakfé
- Department of Vascular Surgery and Kidney Transplantation, University Hospital of Strasbourg, and GEPROVAS, Strasbourg, France
- Fabien Lareyre
- Université Côte d'Azur, CHU, Inserm U1065, C3M, Nice, France; Department of Vascular Surgery, University Hospital of Nice, Nice, France.
21
Artificial intelligence, machine learning, vascular surgery, automatic image processing. Implications for clinical practice. ANGIOLOGIA 2020. [DOI: 10.20960/angiologia.00177] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
22
Breast Cancer Diagnosis Using an Efficient CAD System Based on Multiple Classifiers. Diagnostics (Basel) 2019; 9:diagnostics9040165. [PMID: 31717809 PMCID: PMC6963468 DOI: 10.3390/diagnostics9040165] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 10/21/2019] [Accepted: 10/24/2019] [Indexed: 11/17/2022] Open
Abstract
Breast cancer is one of the major health issues across the world. In this study, a new computer-aided detection (CAD) system is introduced. First, the mammogram images were enhanced to increase the contrast. Second, the pectoral muscle was eliminated and the breast was suppressed from the mammogram. Afterward, some statistical features were extracted. Next, k-nearest neighbor (k-NN) and decision tree classifiers were used to classify the normal and abnormal lesions. Moreover, a multiple classifier system (MCS) was constructed, as it usually improves classification results. The MCS has two structures, cascaded and parallel. Finally, two wrapper feature selection (FS) approaches were applied to identify the features that influence classification accuracy. Two data sets, (1) the Mammographic Image Analysis Society digital mammogram database (MIAS) and (2) the Digital Mammography DREAM Challenge, were combined to test the proposed CAD system. The highest accuracy achieved with the proposed CAD system before FS was 99.7%, using AdaBoost ensembles of J48 decision tree classifiers. The highest accuracy after FS was 100%, achieved with the k-NN classifier. Moreover, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve was equal to 1.0. The results showed that the proposed CAD system was able to accurately classify normal and abnormal lesions in mammogram samples.
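The k-NN classification step can be sketched in a few lines of plain NumPy. The two-feature "patches" and labels below are our own toy assumptions, not the MIAS or DREAM data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN: majority label among the k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distance to every sample
    nearest = y_train[np.argsort(d)[:k]]         # labels of the k closest
    return int(np.bincount(nearest).argmax())    # majority vote

# Toy statistical features for five patches (0 = normal, 1 = abnormal)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 0, 1, 1])

print(knn_predict(X, y, np.array([0.2, 0.2])))  # 0
print(knn_predict(X, y, np.array([5.0, 5.5])))  # 1
```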
23
Attallah O, Sharkas MA, Gadelkarim H. Fetal Brain Abnormality Classification from MRI Images of Different Gestational Age. Brain Sci 2019; 9:brainsci9090231. [PMID: 31547368 PMCID: PMC6770437 DOI: 10.3390/brainsci9090231] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Revised: 08/30/2019] [Accepted: 09/08/2019] [Indexed: 02/07/2023] Open
Abstract
Magnetic resonance imaging (MRI) is a common imaging technique used extensively to study human brain activity. Recently, it has been used for scanning the fetal brain. Approximately 3 in every 1000 pregnant women have fetuses with brain abnormalities. Hence, early detection and classification are important. Machine learning techniques have a large potential for aiding the early detection of these abnormalities, which could enhance the diagnosis process and follow-up plans. Most research on the classification of abnormal brains at an early age has focused on newborns and premature infants, with fewer studies focusing on images of fetuses. Those studies associated fetal scans with scans after birth for the detection and classification of brain defects early in the neonatal age; this type of brain abnormality is named small for gestational age (SGA). This article proposes a novel framework for the classification of fetal brains at an early age (before the fetus is born). To the best of our knowledge, this is the first study to classify brain abnormalities of fetuses across widespread gestational ages (GAs). The study incorporates several machine learning classifiers, such as diagonal quadratic discriminant analysis (DQDA), K-nearest neighbour (K-NN), random forest, naïve Bayes, and radial basis function (RBF) neural network classifiers. Moreover, several bagging and AdaBoost ensemble models were constructed using random forest, naïve Bayes, and RBF network classifiers, and their performance was compared with that of their individual base models. Our results show that our novel approach can successfully identify and classify numerous types of defects within MRI images of fetal brains of various GAs. Using the K-NN classifier, we achieved the highest classification accuracy of 95.6% and an area under the receiver operating characteristic curve of 99%. In addition, ensemble classifiers improved on the results of their respective individual models.
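The bagging ensembles mentioned above combine bootstrap resampling with a majority vote. In the sketch below, a 1-nearest-neighbour base learner on toy 1-D features stands in for the paper's random forest, naïve Bayes, and RBF bases; all data and names are our own assumptions:

```python
import numpy as np

def one_nn(X_tr, y_tr, x):
    """Base learner: 1-nearest-neighbour on the (resampled) training set."""
    return y_tr[np.argmin(np.abs(X_tr - x))]

def bagged_predict(X, y, x, n_models=5, seed=0):
    """Bagging: fit each base model on a bootstrap resample, then majority-vote."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))    # bootstrap sample (with replacement)
        votes.append(one_nn(X[idx], y[idx], x))
    return int(np.bincount(votes).argmax())

# Toy 1-D features: class 0 clusters near 0-3, class 1 near 10-13
X = np.array([0.0, 1.0, 2.0, 3.0, 10.0, 11.0, 12.0, 13.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(bagged_predict(X, y, 10.5))
```

Averaging over bootstrap resamples reduces the variance of an unstable base learner, which is why the ensembles in the abstract outperform their individual models.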
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications, College of Engineering and Technology, Arab Academy for Science and Technology and Maritime Transport, Alexandria, P.O. Box 1029, Egypt.
- Maha A Sharkas
- Department of Electronics and Communications, College of Engineering and Technology, Arab Academy for Science and Technology and Maritime Transport, Alexandria, P.O. Box 1029, Egypt.
- Heba Gadelkarim
- Department of Electronics and Communications, College of Engineering and Technology, Arab Academy for Science and Technology and Maritime Transport, Alexandria, P.O. Box 1029, Egypt.
- Department of Computer and Communication Engineering (SSP), Faculty of Engineering, Alexandria University, Alexandria 21526, Egypt.