101
Aloo R, Mutoh A, Moriyama K, Matsui T, Inuzuka N. Ensemble method using real images, metadata and synthetic images for control of class imbalance in classification. Artificial Life and Robotics 2022; 27:796-803. [PMID: 36068817] [PMCID: PMC9437415] [DOI: 10.1007/s10015-022-00781-8]
Abstract
Binary classification and anomaly detection face the problem of class imbalance in data sets. The contribution of this paper is to provide an ensemble model that improves image binary classification by reducing the class imbalance between the minority and majority classes in a data set. The ensemble model is a classifier of real images, synthetic images, and metadata associated with the real images. First, we apply a generative model to synthesize images of the minority class from the real image data set. Secondly, we train the ensemble model jointly with synthesized images of the minority class, real images, and metadata. Finally, we evaluate the model performance using a sensitivity metric to observe the difference in classification resulting from the adjustment of class imbalance. By adding synthetic minority images amounting to half the size of the majority class, we observe an improvement in the classifier's sensitivity of 12% and 24% for the benchmark pre-trained models ResNet50 and DenseNet121, respectively.
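The rebalancing arithmetic described in this abstract (grow the minority class by half the size of the majority class, then train) can be sketched as follows. This is a minimal illustration with hypothetical names; noisy copies of minority samples stand in for the paper's generative model.

```python
import numpy as np

def rebalance_minority(X, y, minority=1, fraction=0.5, rng=None):
    """Add synthetic minority-class samples until the minority gains
    `fraction` of the majority-class size. Noisy copies stand in for
    generator output in this sketch."""
    rng = np.random.default_rng(rng)
    X_min = X[y == minority]
    n_major = int(np.sum(y != minority))
    n_new = int(fraction * n_major)                     # half the majority size
    idx = rng.integers(0, len(X_min), size=n_new)
    synth = X_min[idx] + rng.normal(0.0, 0.01, size=(n_new, X.shape[1]))
    X_out = np.concatenate([X, synth])
    y_out = np.concatenate([y, np.full(n_new, minority)])
    return X_out, y_out

# toy data: 100 majority (class 0), 20 minority (class 1)
X = np.random.rand(120, 8)
y = np.array([0] * 100 + [1] * 20)
X_b, y_b = rebalance_minority(X, y, minority=1, fraction=0.5, rng=0)
print(int(np.sum(y_b == 1)))  # 20 + 0.5 * 100 = 70
```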
Affiliation(s)
- Rogers Aloo
- Nagoya Institute of Technology, Nagoya, Japan
102
Kong J, He Y, Zhu X, Shao P, Xu Y, Chen Y, Coatrieux JL, Yang G. BKC-Net: Bi-Knowledge Contrastive Learning for renal tumor diagnosis on 3D CT images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109369]
103
Zhou Y, Yen GG, Yi Z. Evolutionary Shallowing Deep Neural Networks at Block Levels. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:4635-4647. [PMID: 33635798] [DOI: 10.1109/TNNLS.2021.3059529]
Abstract
Neural networks have been demonstrated to be trainable even with hundreds of layers, which exhibit remarkable improvement on expressive power and provide significant performance gains in a variety of tasks. However, the prohibitive computational cost has become a severe challenge for deploying them on resource-constrained platforms. Meanwhile, widely adopted deep neural network architectures, for example, ResNets or DenseNets, are manually crafted on benchmark datasets, which hampers their generalization ability to other domains. To cope with these issues, we propose an evolutionary algorithm-based method for shallowing deep neural networks (DNNs) at block levels, termed ESNB. Different from existing studies, ESNB utilizes the ensemble view of block-wise DNNs and employs the multiobjective optimization paradigm to reduce the number of blocks while avoiding performance degradation. It automatically discovers shallower network architectures by pruning less informative blocks, and employs knowledge distillation to recover the performance. Moreover, a novel prior knowledge incorporation strategy is proposed to improve the exploration ability of the evolutionary search process, and a correctness-aware knowledge distillation strategy is designed for better knowledge transfer. Experimental results show that the proposed method can effectively accelerate the inference of DNNs while achieving superior performance when compared with the state-of-the-art competing methods.
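The knowledge distillation used to recover performance after block pruning can be illustrated with the standard Hinton-style distillation objective (the paper's correctness-aware variant is not specified here, so this sketch shows only the base loss it builds on; all names are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Distillation objective: alpha * soft-target cross-entropy against
    the teacher at temperature T (scaled by T^2, as is conventional)
    plus (1 - alpha) * hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -np.mean(np.sum(p_teacher * log_p_student, axis=-1)) * T * T
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -np.mean(log_p[np.arange(len(labels)), labels])
    return alpha * soft + (1 - alpha) * hard

logits_s = np.array([[2.0, 0.5, -1.0]])   # shallow student
logits_t = np.array([[3.0, 0.2, -2.0]])   # deep teacher
print(kd_loss(logits_s, logits_t, np.array([0])))
```

With `alpha=0` the loss reduces to plain cross-entropy on the hard labels, which makes the soft-target term easy to ablate.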
104
Küstner T, Vogel J, Hepp T, Forschner A, Pfannenberg C, Schmidt H, Schwenzer NF, Nikolaou K, la Fougère C, Seith F. Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504] [PMCID: PMC9498091] [DOI: 10.3390/diagnostics12092102]
Abstract
Besides tremendous treatment success in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe life-threatening side effects. Predictive baseline biomarkers are therefore becoming increasingly important for risk stratification and personalized treatment planning. Thus, the aim of this pilot study was the development of a prognostic tool for the risk stratification of treatment response and mortality based on PET/MRI and PET/CT, including a convolutional neural network (CNN), for metastasized-melanoma patients before systemic-treatment initiation. The evaluation was based on 37 patients (19 female, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan-Meier analysis and statistical significance (Wilcoxon test, paired t-test and Bonferroni correction) were assessed. Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastasis with a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm²/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
Affiliation(s)
- Thomas Küstner
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Jonas Vogel
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Tobias Hepp
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Andrea Forschner
- Department of Dermatology, University Hospital of Tübingen, 72070 Tübingen, Germany
- Christina Pfannenberg
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Holger Schmidt
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Siemens Healthineers, 91052 Erlangen, Germany
- Nina F. Schwenzer
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Konstantin Nikolaou
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tübingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tübingen, Germany
- Christian la Fougère
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tübingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tübingen, Germany
- Ferdinand Seith
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tübingen, Germany
105
Yilmaz A, Gencoglan G, Varol R, Demircali AA, Keshavarz M, Uvet H. MobileSkin: Classification of Skin Lesion Images Acquired Using Mobile Phone-Attached Hand-Held Dermoscopes. J Clin Med 2022; 11:5102. [PMID: 36079042] [PMCID: PMC9457478] [DOI: 10.3390/jcm11175102]
Abstract
Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. With dermoscopic equipment, many lesion patterns that are invisible under visible light can be clearly distinguished, so more accurate decisions can be made regarding the treatment of skin lesions. Images collected with dermoscopes have both increased the performance of human examiners and, through the availability of large-scale dermoscopic datasets, enabled the development of deep learning models that classify skin lesions with high accuracy. However, most dermoscopic datasets contain images collected with digital dermoscopic devices, as these are frequently used for clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset of dermoscopic images taken using a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures have been developed to classify eight different lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights pre-trained on the ImageNet dataset and then fine-tuned on the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, achieved accuracies of 89.18% and 89.64%, respectively. The results were evaluated with the 5-fold cross-validation method and compared. Our method allows for automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
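The data augmentation step mentioned above can be sketched with the most common geometric variants (the paper's exact augmentation set is not given here, so flips and rotations are an assumption, and the function name is hypothetical):

```python
import numpy as np

def augment(img):
    """Generate simple augmented variants of one image: the original,
    horizontal and vertical flips, and three 90-degree rotations."""
    views = [img, np.fliplr(img), np.flipud(img)]
    views += [np.rot90(img, k) for k in (1, 2, 3)]
    return views

img = np.arange(12).reshape(3, 4)   # stand-in for a dermoscopic image
print(len(augment(img)))  # 6 variants per source image
```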
Affiliation(s)
- Abdurrahim Yilmaz
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
- Gulsum Gencoglan
- Department of Dermatology, Liv Hospital Vadistanbul, Istinye University, 34396 Istanbul, Turkey
- Rahmetullah Varol
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
- Ali Anil Demircali
- Department of Metabolism, Digestion and Reproduction, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
- Meysam Keshavarz
- Department of Electrical and Electronic Engineering, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
- Huseyin Uvet
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
106
Deep Learning in Dermatology: A Systematic Review of Current Approaches, Outcomes, and Limitations. JID Innovations 2022; 3:100150. [PMID: 36655135] [PMCID: PMC9841357] [DOI: 10.1016/j.xjidi.2022.100150]
Abstract
Artificial intelligence (AI) has recently made great advances in image classification and malignancy prediction in the field of dermatology. However, understanding the applicability of AI in clinical dermatology practice remains challenging owing to the variability of models, image data, database characteristics, and outcome metrics. This systematic review aims to provide a comprehensive overview of the dermatology literature using convolutional neural networks. Furthermore, the review summarizes the current landscape of image datasets, transfer learning approaches, challenges, and limitations within the current AI literature, as well as current regulatory pathways for approval of models as clinical decision support tools.
107
Ghosh P, Azam S, Quadir R, Karim A, Shamrat FMJM, Bhowmik SK, Jonkman M, Hasib KM, Ahmed K. SkinNet-16: A deep learning approach to identify benign and malignant skin lesions. Front Oncol 2022; 12:931141. [PMID: 36003775] [PMCID: PMC9395205] [DOI: 10.3389/fonc.2022.931141]
Abstract
Skin cancer has become quite a common occurrence, especially in certain geographic areas such as Oceania, and is now one of the most ubiquitous types of cancer. Early and accurate identification of cancerous skin lesions is of utmost importance in treating this malady, and studies have shown that deep learning-based intelligent approaches to this concern have been fruitful. In this research, we employed a deep learning approach to identify benign and malignant skin lesions on a relevant preprocessed dataset with important features pre-identified through an effective feature extraction method. The initial dataset was obtained from Kaggle, after which several preprocessing steps for hair and background removal, image enhancement, selection of the region of interest (ROI), region-based segmentation, morphological gradient, and feature extraction were performed, resulting in histopathological image data with 20 input features based on geometrical and textural properties. A principal component analysis (PCA)-based feature extraction technique was put into action to reduce the dimensionality to 10 input features. Subsequently, we applied our deep learning classifier, SkinNet-16, to detect the cancerous lesion accurately at a very early stage. The highest accuracy was obtained with the Adamax optimizer at a learning rate of 0.006, and the model delivered an impressive accuracy of approximately 99.19%.
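The PCA step the abstract describes (reducing 20 handcrafted features to 10) can be sketched with a plain SVD; the data and function name below are illustrative:

```python
import numpy as np

def pca_reduce(X, k=10):
    """Project a feature matrix X (n_samples x n_features) onto its top-k
    principal components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T          # rows of Vt are the principal directions

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # 200 samples, 20 handcrafted features
Z = pca_reduce(X, k=10)           # reduced to 10 features, as in the paper
print(Z.shape)  # (200, 10)
```

The projected components come out ordered by explained variance, so the first column of `Z` captures the most spread in the data.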
Affiliation(s)
- Pronab Ghosh
- Department of Computer Science (CS), Lakehead University, Thunder Bay, ON, Canada
- Sami Azam
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT, Australia
- Correspondence: Sami Azam
- Ryana Quadir
- Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
- Asif Karim
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT, Australia
- F. M. Javed Mehedi Shamrat
- Department of Computer Science and Engineering, Ahsanullah University of Science & Technology, Dhaka, Bangladesh
- Shohag Kumar Bhowmik
- Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
- Mirjam Jonkman
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT, Australia
- Khan Md. Hasib
- Department of Computer Science and Engineering, Ahsanullah University of Science & Technology, Dhaka, Bangladesh
- Kawsar Ahmed
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
108
A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity. PLoS One 2022; 17:e0269826. [PMID: 35925956] [PMCID: PMC9352099] [DOI: 10.1371/journal.pone.0269826]
Abstract
The complex feature characteristics and low contrast of cancer lesions, a high degree of inter-class resemblance between malignant and benign lesions, and the presence of various artifacts including hairs make automated melanoma recognition in dermoscopy images quite challenging. To date, various computer-aided solutions have been proposed to identify and classify skin cancer. In this paper, a deep learning model with a shallow architecture is proposed to classify lesions as benign or malignant. To achieve effective training while limiting overfitting due to limited training data, image preprocessing and data augmentation processes are introduced. After this, the ‘box blur’ down-scaling method is employed, which adds efficiency to the study by significantly reducing the overall training time and space complexity. The proposed shallow convolutional neural network (SCNN_12) model is trained and evaluated on the Kaggle skin cancer data from the ISIC archive, which was augmented to 16,485 images by implementing different augmentation techniques. The model achieved an accuracy of 98.87% with the Adam optimizer and a learning rate of 0.001. The parameters and hyper-parameters of the model were determined by performing ablation studies. To confirm the absence of overfitting, experiments were carried out exploring k-fold cross-validation and different dataset split ratios. Furthermore, to affirm robustness, the model was evaluated on noisy data to examine its performance when image quality is corrupted. This research corroborates that effective training for medical image analysis, addressing training time and space complexity, is possible even with a lightweight network using a limited amount of training data.
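A box blur followed by subsampling collapses to simple block averaging, which is what makes the down-scaling step above cheap; a minimal sketch (function name assumed):

```python
import numpy as np

def box_blur_downscale(img, factor):
    """Down-scale a 2-D image by averaging each factor x factor block,
    equivalent to a box blur followed by subsampling."""
    h = img.shape[0] - img.shape[0] % factor   # crop to a multiple of factor
    w = img.shape[1] - img.shape[1] % factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = box_blur_downscale(img, 2)
print(small.shape)   # (2, 2)
print(small[0, 0])   # mean of the block [[0, 1], [4, 5]] = 2.5
```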
109
Shan P, Fu C, Dai L, Jia T, Tie M, Liu J. Automatic skin lesion classification using a new densely connected convolutional network with an SF module. Med Biol Eng Comput 2022; 60:2173-2188. [DOI: 10.1007/s11517-022-02583-3]
110
Khan MA, Sharif MI, Raza M, Anjum A, Saba T, Shad SA. Skin lesion segmentation and classification: A unified framework of deep neural network features fusion and selection. Expert Systems 2022; 39. [DOI: 10.1111/exsy.12497]
Abstract
Automated skin lesion diagnosis from dermoscopic images is a difficult process due to several notable problems such as artefacts (hairs), irregularity, lesion shape, and irrelevant feature extraction. These problems make the segmentation and classification process difficult. In this research, we propose optimized colour features (OCFs) for lesion segmentation and deep convolutional neural network (DCNN)-based skin lesion classification. A hybrid technique is proposed to remove the artefacts and improve the lesion contrast. A colour segmentation technique based on these OCFs is then presented. The OCF approach is further improved by an existing saliency approach, which is fused by a novel pixel-based method. A DCNN-9 model is implemented to extract deep features, which are fused with the OCFs by a novel parallel fusion approach. After this, a normal distribution-based high-ranking feature selection technique is utilized to select the most robust features for classification. The suggested method is evaluated on the ISBI series (2016, 2017, and 2018) datasets. The experiments are performed in two steps and achieve an average segmentation accuracy of more than 90% on the selected datasets. Moreover, the achieved classification accuracies of 92.1%, 96.5%, and 85.1% on the three datasets show that the presented method has remarkable performance.
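The high-ranking feature selection step can be illustrated with a simple class-separation score; the score below (difference of class means over overall standard deviation) is a stand-in for the paper's normal distribution-based criterion, not its exact formula, and all names are hypothetical:

```python
import numpy as np

def rank_features(F, labels, k):
    """Keep the top-k features ranked by |mean difference between the two
    classes| / overall std. Illustrative stand-in for the paper's
    normal distribution-based high-ranking selection."""
    F0, F1 = F[labels == 0], F[labels == 1]
    score = np.abs(F0.mean(axis=0) - F1.mean(axis=0)) / (F.std(axis=0) + 1e-12)
    top = np.argsort(score)[::-1][:k]   # indices of the k highest scores
    return F[:, top], top

rng = np.random.default_rng(1)
F = rng.normal(size=(100, 6))
labels = np.array([0] * 50 + [1] * 50)
F[labels == 1, 0] += 3.0            # make feature 0 strongly discriminative
selected, order = rank_features(F, labels, k=2)
print(order[0])  # 0: the discriminative feature ranks first
```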
Affiliation(s)
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Almas Anjum
- Department of Computer Science, University of Wah, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
111
Rasheed A, Umar AI, Shirazi SH, Khan Z, Nawaz S, Shahzad M. Automatic eczema classification in clinical images based on hybrid deep neural network. Comput Biol Med 2022; 147:105807. [PMID: 35809409] [DOI: 10.1016/j.compbiomed.2022.105807]
Abstract
The healthcare sector is the highest-priority sector, and people demand the best services and care. The fast rise of deep learning, particularly in clinical decision support tools, has provided exciting solutions, primarily in medical imaging. In the past, ANNs (artificial neural networks) have been used extensively in dermatology and have shown promising results for detecting various skin diseases. Eczema represents a group of skin conditions characterized by irritated, dry, inflamed, and itchy skin. This study helps automate the diagnosis of various kinds of eczema through a hybrid model that uses concatenated ReliefF-optimized handcrafted and deep activation features with a support vector machine for classification. Deep learning models and standard image processing techniques have been used to classify eczema from images automatically. This work contributes the first multiclass image dataset for this task, namely EIR (Eczema Image Resource). The EIR dataset consists of 2039 labeled eczema images belonging to seven categories. We performed a comparative analysis of multiple ensemble models, attention mechanisms, and data augmentation techniques for this task. The accuracy, sensitivity, and specificity of each classifier for eczema classification were recorded. The proposed Hybrid 6 network achieved the highest accuracy of 88.29%, sensitivity of 85.19%, and specificity of 90.33% among all employed models. Our findings suggest that deep learning models can classify eczema with high accuracy and that their performance is comparable to dermatologists. However, several factors that reduce accuracy have been elucidated, leaving potential scope for improvement.
Affiliation(s)
- Assad Rasheed
- Department of Information Technology, Hazara University Mansehra, Pakistan
- Arif Iqbal Umar
- Department of Information Technology, Hazara University Mansehra, Pakistan
- Syed Hamad Shirazi
- Department of Information Technology, Hazara University Mansehra, Pakistan
- Zakir Khan
- Department of Information Technology, Hazara University Mansehra, Pakistan
- Shah Nawaz
- Department of Information Technology, Hazara University Mansehra, Pakistan
- Muhammad Shahzad
- Department of Information Technology, Hazara University Mansehra, Pakistan
112
Wang Y, Feng Y, Zhang L, Zhou JT, Liu Y, Goh RSM, Zhen L. Adversarial multimodal fusion with attention mechanism for skin lesion classification using clinical and dermoscopic images. Med Image Anal 2022; 81:102535. [PMID: 35872361] [DOI: 10.1016/j.media.2022.102535]
Abstract
Accurate skin lesion diagnosis requires a great effort from experts to identify the characteristics from clinical and dermoscopic images. Deep multimodal learning-based methods can reduce intra- and inter-reader variability and improve diagnostic accuracy compared to single modality-based methods. This study develops a novel method, named adversarial multimodal fusion with attention mechanism (AMFAM), to perform multimodal skin lesion classification. Specifically, we adopt a discriminator that uses adversarial learning to enforce the feature extractor to learn the correlated information explicitly. Moreover, we design an attention-based reconstruction strategy to encourage the feature extractor to concentrate on learning the features of the lesion area, thus enhancing the feature vector from each modality with more discriminative information. Unlike existing multimodal-based approaches, which only focus on learning complementary features from dermoscopic and clinical images, our method considers both correlated and complementary information of the two modalities for multimodal fusion. To verify the effectiveness of our method, we conduct comprehensive experiments on a publicly available multimodal and multi-task skin lesion classification dataset: the 7-point criteria evaluation database. The experimental results demonstrate that our proposed method outperforms the current state-of-the-art methods and improves the average AUC score by more than 2% on the test set.
Affiliation(s)
- Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, P.R. China
- Joey Tianyi Zhou
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Rick Siow Mong Goh
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Liangli Zhen
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
113
Medical Image Classification Using Transfer Learning and Chaos Game Optimization on the Internet of Medical Things. Computational Intelligence and Neuroscience 2022; 2022:9112634. [PMID: 35875781] [PMCID: PMC9300353] [DOI: 10.1155/2022/9112634]
Abstract
The Internet of Medical Things (IoMT) has dramatically benefited medical professionals and patients, who can access services from all regions. Although the automatic detection and prediction of diseases such as melanoma and leukemia is still being investigated and studied in IoMT, existing approaches are not able to achieve a high degree of efficiency. A new approach that provides better results would let patients access adequate treatments earlier and reduce the death rate. Therefore, this paper introduces an IoMT proposal for medical image classification that may be used anywhere, i.e., a ubiquitous approach. It was designed in two stages: first, we employ a transfer learning (TL)-based method for feature extraction, carried out using MobileNetV3; second, we use chaos game optimization (CGO) for feature selection, with the aim of excluding unnecessary features and improving performance, which is key in IoMT. Our methodology was evaluated using the ISIC-2016, PH2, and Blood-Cell datasets. The experimental results indicated that the proposed approach obtained an accuracy of 88.39% on ISIC-2016, 97.52% on PH2, and 88.79% on Blood-Cell. Moreover, our approach performed well on the metrics employed compared to other existing methods.
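The feature-selection stage can be sketched as a wrapper search over binary feature subsets. Random search stands in here for the chaos game optimization metaheuristic, and the class-separation fitness and every name below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def select_features(F, labels, k, iters=500, rng=None):
    """Wrapper-style selection: sample random k-feature subsets, score each
    with a simple class-separation fitness, keep the best. Random search is
    a stand-in for the CGO metaheuristic used in the paper."""
    rng = np.random.default_rng(rng)
    n_feat = F.shape[1]

    def fitness(cols):
        # separation between the two class means on the chosen columns
        d = F[labels == 0][:, cols].mean(axis=0) - F[labels == 1][:, cols].mean(axis=0)
        return float(np.abs(d).sum())

    best, best_fit = None, -1.0
    for _ in range(iters):
        cols = rng.choice(n_feat, size=k, replace=False)
        f = fitness(cols)
        if f > best_fit:
            best, best_fit = cols, f
    return np.sort(best)

rng = np.random.default_rng(3)
F = rng.normal(size=(200, 8))
labels = np.array([0] * 100 + [1] * 100)
F[labels == 1, :2] += 3.0                      # features 0 and 1 are informative
print(select_features(F, labels, k=2, rng=0))  # should pick the informative pair
```

Any population-based optimizer (CGO included) slots into the same loop: only the rule for proposing the next subset changes.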
114
115
Semi-Supervised Medical Image Classification Based on Attention and Intrinsic Features of Samples. Applied Sciences-Basel 2022. [DOI: 10.3390/app12136726]
Abstract
The training of deep neural networks usually requires a large amount of high-quality data with good annotations to obtain good performance. However, in clinical medicine, obtaining high-quality labeled data is laborious and expensive because it requires the professional skill of clinicians. In this paper, based on the consistency strategy, we propose a new semi-supervised model for medical image classification which introduces a self-attention mechanism into the backbone network to learn more meaningful features for image classification tasks and uses an improved version of focal loss as the supervised loss to reduce the misclassification of samples. Finally, we add a consistency loss similar to the unsupervised consistency loss to encourage the model to learn more about the internal features of unlabeled samples. Our method achieved 94.02% AUC and 72.03% sensitivity on the ISIC 2018 dataset and 79.74% AUC on the ChestX-ray14 dataset. These results show the effectiveness of our method in single-label and multi-label classification.
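The focal loss the model builds on can be stated compactly. This sketch shows the standard binary form (Lin et al.); the paper's improved variant is not specified here, and the function name is an assumption:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: cross-entropy down-weighted by (1 - p_t)^gamma,
    so well-classified samples contribute little and training focuses on
    hard examples. p is the predicted probability of class 1."""
    p_t = np.where(y == 1, p, 1.0 - p)   # probability assigned to the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + 1e-12)))

p = np.array([0.9, 0.6, 0.2])   # predicted probability of class 1
y = np.array([1, 1, 0])
print(round(focal_loss(p, y, gamma=0.0), 4))  # gamma=0 reduces to plain cross-entropy
print(focal_loss(p, y, gamma=2.0) < focal_loss(p, y, gamma=0.0))  # True
```

Raising `gamma` shrinks the loss on confident predictions the fastest, which is the knob that counters easy-sample dominance.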
116
Gajera HK, Nayak DR, Zaveri MA. Fusion of Local and Global Feature Representation With Sparse Autoencoder for Improved Melanoma Classification. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:5051-5054. [PMID: 36085953] [DOI: 10.1109/EMBC48229.2022.9871370]
Abstract
Automated skin cancer diagnosis is challenging due to inter-class uniformity, intra-class variation, and the complex structure of dermoscopy images. Convolutional neural networks (CNNs) have recently made considerable progress in melanoma classification, even in the presence of limited skin images. One drawback of these methods is the loss of image detail caused by downsampling high-resolution skin images to a low resolution. Further, most approaches extract features only from the whole skin image. This paper proposes an ensemble feature fusion and sparse autoencoder (SAE)-based framework to overcome the above issues and improve melanoma classification performance. The proposed method extracts features from two streams, local and global, using a pre-trained CNN model. The local stream extracts features from image patches, while the global stream derives features from the whole skin image, preserving both local and global representation. The features are then fused, and an SAE framework is subsequently designed to further enrich the feature representation. The proposed method is validated on the ISIC 2016 dataset, and the experimental results indicate the superiority of the proposed approach.
Collapse
|
117
|
Venugopal V, Joseph J, Vipin Das M, Kumar Nath M. An EfficientNet-based modified sigmoid transform for enhancing dermatological macro-images of melanoma and nevi skin lesions. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 222:106935. [PMID: 35724474 DOI: 10.1016/j.cmpb.2022.106935] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 04/28/2022] [Accepted: 06/03/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE During the initial stages, skin lesions may not have sufficient intensity difference or contrast from the background region in dermatological macro-images. A lack of proper light exposure when the image is captured also reduces contrast, and low contrast between lesion and background regions adversely impacts segmentation. Enhancement techniques for improving this contrast in dermatological macro-images are limited in the literature. An EfficientNet-based modified sigmoid transform for enhancing the contrast of dermatological macro-images is proposed to address this issue. METHODS A modified sigmoid transform is applied in the HSV color space. The crossover point of the modified sigmoid transform, which divides the macro-image into lesion and background, is predicted by a modified EfficientNet regressor to exclude manual intervention and subjectivity. The modified EfficientNet regressor is constructed by replacing the classifier layer in the conventional EfficientNet with a regression layer, and transfer learning is employed to reduce the training time and the size of the dataset required for training. The regressor takes as input the value components extracted from the HSV color space representation of the macro-images in the training dataset. Its target is the corresponding set of ideal crossover points, at which the Dice similarity coefficient (DSC) between the ground-truth images and the segmented outputs of Otsu's thresholding is maximal. RESULTS On images enhanced with the proposed framework, the DSC of segmentation results obtained by Otsu's thresholding increased from 0.68 ± 0.34 to 0.81 ± 0.17.
CONCLUSIONS The proposed algorithm consistently improved the contrast between lesion and background on a comprehensive set of test images, justifying its application in the automated analysis of dermatological macro-images.
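The exact modified sigmoid transform is given in the paper itself; as an illustrative sketch, a sigmoid contrast stretch of the HSV value channel centred at a predicted crossover point looks like this (the `gain` parameter is a hypothetical addition, not from the paper):

```python
import math

def sigmoid_enhance(v, crossover, gain=10.0):
    """Sigmoid contrast stretch of a value-channel intensity v in [0, 1].

    Intensities below the crossover point (the lesion/background divide)
    are pushed toward 0 and those above toward 1, widening the contrast
    between lesion and background regions.
    """
    return 1.0 / (1.0 + math.exp(-gain * (v - crossover)))
```

A pixel exactly at the crossover maps to 0.5, so choosing the crossover well (here, by the EfficientNet regressor) determines which side of the divide each pixel is stretched toward.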
Collapse
Affiliation(s)
- Vipin Venugopal
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India.
| | - Justin Joseph
- School of Bioengineering, VIT Bhopal University, Sehore, Madhya Pradesh 466114, India.
| | - M Vipin Das
- Department of Dermatology, Kerala Health Services, Trivandrum, Kerala 695035, India.
| | - Malaya Kumar Nath
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India.
| |
Collapse
|
118
|
Li Z, Wang H, Han Q, Liu J, Hou M, Chen G, Tian Y, Weng T. Convolutional Neural Network with Multiscale Fusion and Attention Mechanism for Skin Diseases Assisted Diagnosis. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8390997. [PMID: 35747726 PMCID: PMC9213118 DOI: 10.1155/2022/8390997] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 05/17/2022] [Indexed: 11/17/2022]
Abstract
Melanoma segmentation based on convolutional neural networks (CNNs) has recently attracted extensive attention. However, the features captured by a CNN are inherently local, which results in discontinuous feature extraction. To solve this problem, we propose a novel multiscale feature fusion network (MSFA-Net). MSFA-Net extracts feature information at different scales through a multiscale feature fusion structure (MSF) and then calibrates and restores the extracted information to achieve melanoma segmentation. Specifically, based on the popular encoder-decoder structure, we designed three functional modules: MSF, an asymmetric skip connection structure (ASCS), and a calibration decoder (Decoder). In addition, a weighted cross-entropy loss and a two-stage learning-rate optimization strategy are designed to train the network more effectively. Compared qualitatively and quantitatively with representative encoder-decoder neural network methods such as U-Net, the proposed method achieves advanced performance.
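The paper's weighted cross-entropy loss is not spelled out in the abstract; a minimal per-pixel sketch, with assumed foreground/background weights, is:

```python
import math

def weighted_cross_entropy(probs, labels, w_fg=2.0, w_bg=1.0):
    """Mean weighted binary cross-entropy over pixels.

    probs: predicted foreground probabilities; labels: 0/1 ground truth.
    Lesion pixels (label 1) receive a larger weight to counter the typical
    foreground/background imbalance in melanoma segmentation masks.
    """
    total = 0.0
    for p, y in zip(probs, labels):
        w = w_fg if y == 1 else w_bg
        total += -w * (y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(labels)
```

Raising `w_fg` above 1 penalizes missed lesion pixels more heavily than false alarms on background.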
Collapse
Affiliation(s)
- Zhong Li
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Hongyi Wang
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Qi Han
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Jingcheng Liu
- Liquor Making Microbial Application & Detection Technology of Luzhou Key Laboratory, Luzhou Vocational & Technical College, Luzhou, Sichuan 646000, China
| | - Mingyang Hou
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Guorong Chen
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Yuan Tian
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| | - Tengfei Weng
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
| |
Collapse
|
119
|
Skin Lesion Segmentation Based on Vision Transformers and Convolutional Neural Networks—A Comparative Study. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12125990] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
Melanoma skin cancer is one of the most common diseases in the world, and detecting it at an early stage is important for saving lives. During medical examinations, visually inspecting such lesions is not an easy task, as different lesions can look very similar. Technological advances in the form of deep learning methods have been used for diagnosing skin lesions. Over the last decade, deep learning, especially CNNs (convolutional neural networks), has proven to be one of the most promising methods for achieving state-of-the-art results in a variety of medical imaging applications. However, ConvNets' capabilities are considered limited due to their lack of understanding of long-range spatial relations in images. The recently proposed Vision Transformer (ViT) for image classification employs a purely self-attention-based model that learns long-range spatial relations to focus on the relevant parts of an image. To achieve better performance, existing transformer-based network architectures require large-scale datasets; however, because medical imaging datasets are small, applying pure transformers to medical image analysis is difficult. ViT also emphasizes low-resolution features: the successive downsampling causes a loss of detailed localization information, rendering it unsuitable for skin lesion segmentation. To improve the recovery of detailed localization information, several ViT-based image segmentation methods have recently been combined with ConvNets in the natural image domain. This study provides a comprehensive comparative study of U-Net and attention-based methods for skin lesion image segmentation, which will assist in the diagnosis of skin lesions. The results show that the hybrid TransUNet, with an accuracy of 92.11% and a Dice coefficient of 89.84%, outperforms the other benchmarked methods.
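The Dice coefficient reported for TransUNet measures overlap between predicted and ground-truth masks; a minimal implementation for flattened binary masks is:

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2|A ∩ B| / (|A| + |B|). Two empty masks are defined as a perfect match."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * intersection / total
```

Unlike plain pixel accuracy, Dice is insensitive to the large background area, which is why it is the standard segmentation metric across these papers.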
Collapse
|
120
|
Abstract
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential pre-processing step for analysis. Many studies have been carried out to address the general problem of evaluating image segmentation results. One of the main focuses in the computer vision field is artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that they require a large dataset of ground truth validated by medical experts. Accordingly, many research groups have developed segmentation approaches tailored to their specific needs. However, a generalised application for visualizing, assessing and comparing the results of different methods, and for facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches, and second, to generate segmented images for ground truths that can later be used by artificial intelligence tools. An experimental demonstration and a performance analysis are presented in this paper.
Collapse
|
121
|
Abstract
Melanoma is a fatal type of skin cancer; its rapid spread results in a high fatality rate when the malignancy is not treated at an early stage. Patients' lives can be saved by accurately detecting skin cancer early, and a quick, precise diagnosis can increase the survival rate. This necessitates the development of a computer-assisted diagnostic support system. This research proposes a novel deep transfer learning model for melanoma classification using MobileNetV2, a deep convolutional neural network that classifies sample skin lesions as malignant or benign. The performance of the proposed model is evaluated on the ISIC 2020 dataset, which contains less than 2% malignant samples and therefore exhibits severe class imbalance. Various data augmentation techniques were applied to tackle the class imbalance and add diversity to the dataset. The experimental results demonstrate that the proposed technique outperforms state-of-the-art deep learning techniques in terms of accuracy and computational cost.
Collapse
|
122
|
Bian X, Pan H, Zhang K, Chen C, Liu P, Shi K. NeDSeM: Neutrosophy Domain-Based Segmentation Method for Malignant Melanoma Images. ENTROPY 2022; 24:e24060783. [PMID: 35741504 PMCID: PMC9222744 DOI: 10.3390/e24060783] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/23/2022] [Accepted: 05/30/2022] [Indexed: 02/07/2023]
Abstract
Skin lesion segmentation is the first and indispensable step of malignant melanoma recognition and diagnosis. Most existing skin lesion segmentation techniques use either traditional methods, such as optimum thresholding, or deep learning methods, such as U-Net. However, the edges of skin lesions in malignant melanoma images change gradually in color, and this change is nonlinear, so existing methods cannot effectively distinguish the banded edges between lesion areas and healthy skin. To handle the uncertainty and fuzziness of these banded edges, this paper applies neutrosophic set theory, which deals with banded-edge segmentation better than fuzzy theory. We therefore propose a neutrosophy domain-based segmentation method with six steps. First, an image is converted into three channels and the pixel matrix of each channel is obtained. Second, the pixel matrixes are converted into the neutrosophic set domain to express the uncertainty and fuzziness of the banded edges of malignant melanoma images. Third, a new neutrosophic entropy model is proposed that combines the three memberships, using transformations in the neutrosophic space, to comprehensively express them and highlight the banded edges of the images. Fourth, a feature-augmentation method is built from the differences among the three components. Fifth, dilation is applied to the neutrosophic entropy matrixes to fill in noisy regions. Finally, the image represented by the transformed matrix is segmented with a hierarchical Gaussian mixture model clustering method to obtain the banded edge of the image. Qualitative and quantitative experiments on a malignant melanoma image dataset evaluate the performance of the NeDSeM method. Compared with several state-of-the-art methods, our method achieves good results in terms of performance and accuracy.
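The paper's own conversion formulas (including the entropy model) are defined in the full text; as an illustrative sketch of one common per-pixel neutrosophic mapping, truth follows normalized intensity, indeterminacy follows local inhomogeneity, and falsity is the complement of truth:

```python
def to_neutrosophic(g, g_min, g_max, local_std, std_max):
    """Map a pixel to (truth, indeterminacy, falsity) memberships.

    g: pixel intensity; [g_min, g_max]: intensity range of the channel;
    local_std: standard deviation in the pixel's neighbourhood, whose
    normalized value serves as the indeterminacy, which is largest for the
    uncertain banded-edge pixels this method targets.
    """
    t = (g - g_min) / (g_max - g_min)
    i = local_std / std_max
    f = 1.0 - t
    return t, i, f
```

Truth and falsity are complementary here, while indeterminacy is an independent third channel, which is the extra degree of freedom neutrosophic sets add over fuzzy sets.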
Collapse
|
123
|
Liu Y, Zhou J, Liu L, Zhan Z, Hu Y, Fu Y, Duan H. FCP-Net: A Feature-Compression-Pyramid Network Guided by Game-Theoretic Interactions for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1482-1496. [PMID: 34982679 DOI: 10.1109/tmi.2021.3140120] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Medical image segmentation is a crucial step in the diagnosis and analysis of diseases for clinical applications. Deep convolutional neural network methods such as DeepLabv3+ have successfully been applied to medical image segmentation, but multi-level features are seldom integrated seamlessly into different attention mechanisms, and few studies have fully explored the interactions between medical image segmentation and classification tasks. Herein, we propose a feature-compression-pyramid network (FCP-Net) guided by game-theoretic interactions with a hybrid loss function (HLF) for medical image segmentation. The proposed approach consists of a segmentation branch, a classification branch and an interaction branch. In the encoding stage, a new strategy is developed for the segmentation branch by applying three modules: embedded feature ensemble, dilated spatial mapping and channel attention (DSMCA), and branch layer fusion. These modules allow effective extraction of spatial information, efficient identification of spatial correlation among various features, and full integration of multi-receptive-field features from different branches. In the decoding stage, a DSMCA module and a multi-scale feature fusion module establish multiple skip connections to enhance the fused features. The classification and interaction branches are introduced to explore the potential benefits of the classification task to the segmentation task. We further analyze the interactions of the segmentation and classification branches from a game-theoretic view and design an HLF accordingly. Based on this HLF, the segmentation, classification and interaction branches can collaboratively learn and teach each other throughout the training process, exploiting the conjoint information between the segmentation and classification tasks and improving generalization performance.
The proposed model has been evaluated on several datasets, including ISIC2017, ISIC2018, REFUGE, Kvasir-SEG, BUSI, and PH2, and the results prove its competitiveness compared with other state-of-the-art techniques.
Collapse
|
124
|
Xue C, Yu L, Chen P, Dou Q, Heng PA. Robust Medical Image Classification From Noisy Labeled Data With Global and Local Representation Guided Co-Training. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1371-1382. [PMID: 34982680 DOI: 10.1109/tmi.2021.3140140] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep neural networks have achieved remarkable success in a wide variety of natural image and medical image computing tasks. However, these achievements rely indispensably on accurately annotated training data. When some images are noisy-labeled, the network training procedure suffers, leading to a sub-optimal classifier. This problem is even more severe in medical image analysis, as the annotation quality of medical images heavily depends on the expertise and experience of annotators. In this paper, we propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification from noisy-labeled data, to combat the lack of high-quality annotated medical data. Specifically, we employ a self-ensemble model with a noisy-label filter to efficiently select the clean and noisy samples. The clean samples are then trained with a collaborative training strategy to eliminate the disturbance from imperfectly labeled samples. Notably, we further design a novel global and local representation learning scheme that implicitly regularizes the networks to utilize noisy samples in a self-supervised manner. We evaluated our proposed robust learning strategy on four public medical image classification datasets with three types of label noise, i.e., random noise, computer-generated label noise, and inter-observer variability noise. Our method outperforms other learning-from-noisy-labels methods, and we conducted extensive experiments to analyze each component of our method.
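The self-ensemble noisy-label filter is detailed in the paper; many co-training schemes implement the filtering step as small-loss selection, sketched here with an assumed keep ratio:

```python
def select_clean(losses, keep_ratio=0.7):
    """Small-loss selection: treat the keep_ratio fraction of samples with
    the lowest per-sample loss as presumed clean, returning their indices.
    Noisy-labeled samples tend to incur higher loss early in training, so
    filtering on loss removes most of them before collaborative training.
    """
    k = int(len(losses) * keep_ratio)
    order = sorted(range(len(losses)), key=lambda idx: losses[idx])
    return sorted(order[:k])
```

The rejected (high-loss) samples need not be discarded: as in the paper, they can still contribute through a self-supervised objective that ignores their labels.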
Collapse
|
125
|
Medical Image Classification Utilizing Ensemble Learning and Levy Flight-Based Honey Badger Algorithm on 6G-Enabled Internet of Things. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5830766. [PMID: 35676950 PMCID: PMC9168094 DOI: 10.1155/2022/5830766] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 03/20/2022] [Accepted: 04/30/2022] [Indexed: 12/23/2022]
Abstract
Recently, the 6G-enabled Internet of Medical Things (IoMT) has played a key role in the development of functional health systems due to the massive data generated daily by hospitals. The automatic detection and prediction of future risks such as pneumonia and retinal diseases are still under study, however, and traditional approaches have not yielded accurate diagnoses. In this paper, a robust 6G-enabled IoMT framework is proposed for medical image classification with an ensemble learning (EL)-based model. EL is achieved using MobileNet and DenseNet architectures as the feature extraction backbone. In addition, the developed framework uses a modified honey badger algorithm (HBA) based on Levy flight (LFHBA) as a feature selection method that removes irrelevant features from those extracted by the EL model. The performance of the proposed framework was evaluated on a chest X-ray (CXR) dataset and an optical coherence tomography (OCT) dataset, where it achieved accuracies of 87.10% and 94.32%, respectively. The proposed method is more accurate and efficient than other well-known and popular algorithms.
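The Levy-flight modification lengthens the honey badger algorithm's occasional exploration jumps; a common way to draw a Levy step is Mantegna's algorithm, sketched below (how the step is integrated into HBA's position update is defined in the paper and may differ):

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-flight step via Mantegna's algorithm.

    The ratio of two Gaussians yields a heavy-tailed step length, so most
    moves are small but occasional long jumps help the optimizer escape
    local optima during feature selection.
    """
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Sampling many steps shows the characteristic heavy tail: a handful of jumps are far longer than the typical step.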
Collapse
|
126
|
Kaur R, GholamHosseini H, Sinha R, Lindén M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Med Imaging 2022; 22:103. [PMID: 35644612 PMCID: PMC9148511 DOI: 10.1186/s12880-022-00829-y] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Accepted: 04/13/2022] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Melanoma is the most dangerous and aggressive form of skin cancer, exhibiting a high mortality rate worldwide. Biopsy and histopathological analysis are the standard procedures for skin cancer detection and prevention in clinical settings. A significant step in the diagnostic process is a deep understanding of the patterns, size, color, and structure of lesions based on dermatoscope images of the infected area. However, manual segmentation of the lesion region is time-consuming because the lesion evolves and changes shape over time, making its prediction challenging. Moreover, melanoma is difficult to predict at an early stage because it closely resembles skin cancer types that are not as malignant; automatic segmentation techniques are therefore required to design a computer-aided system for accurate and timely detection. METHODS As deep learning approaches have gained significant attention in recent years due to their remarkable performance, in this work we propose a novel convolutional neural network (CNN) framework based on atrous convolutions for automatic lesion segmentation. The architecture builds on atrous/dilated convolutions, which are effective for semantic segmentation. A deep neural network is designed from scratch from building blocks consisting of convolutional, batch normalization, and leaky ReLU layers, with fine-tuned hyperparameters all contributing toward higher performance. RESULTS The network was tested on three benchmark datasets provided by the International Skin Imaging Collaboration (ISIC): ISIC 2016, ISIC 2017, and ISIC 2018. The proposed network achieved average Jaccard indices of 90.4% on ISIC 2016, 81.8% on ISIC 2017, and 89.1% on ISIC 2018, higher than the top three winners of the ISIC challenge and other state-of-the-art methods.
The model also extracts lesions from the whole image in a single, fast pass, requiring no pre-processing step. CONCLUSION The network performs accurate lesion segmentation on the adopted datasets.
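Atrous (dilated) convolutions enlarge the receptive field without downsampling, which is why such a network can cover a whole lesion in one pass while preserving detail; the growth can be computed directly (the kernel size and dilation schedule here are illustrative, not the paper's exact configuration):

```python
def atrous_receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    """Receptive field of a stack of stride-1 dilated convolutions:
    each layer widens the field by (kernel - 1) * dilation pixels."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

Four 3x3 layers with dilations 1, 2, 4, 8 already cover a 31-pixel window, versus 9 pixels for four undilated layers, at the same parameter count.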
Collapse
Affiliation(s)
- Ranpreet Kaur
- School of Engineering, Computer, and Mathematical Sciences, Auckland University of Technology, 55 Wellesley street, 1010 Auckland, New Zealand
| | - Hamid GholamHosseini
- School of Engineering, Computer, and Mathematical Sciences, Auckland University of Technology, 55 Wellesley street, 1010 Auckland, New Zealand
| | - Roopak Sinha
- School of Engineering, Computer, and Mathematical Sciences, Auckland University of Technology, 55 Wellesley street, 1010 Auckland, New Zealand
| | - Maria Lindén
- School of Innovation Design and Engineering, Mälardalen University, Västerås, Sweden
| |
Collapse
|
127
|
Yao P, Shen S, Xu M, Liu P, Zhang F, Xing J, Shao P, Kaffenberger B, Xu RX. Single Model Deep Learning on Imbalanced Small Datasets for Skin Lesion Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1242-1254. [PMID: 34928791 DOI: 10.1109/tmi.2021.3136682] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep convolutional neural network (DCNN) models have been widely explored for skin disease diagnosis, and some have achieved diagnostic outcomes comparable or even superior to those of dermatologists. However, broad implementation of DCNNs in skin disease detection is hindered by the small size and data imbalance of the publicly accessible skin lesion datasets. This paper proposes a novel single-model strategy for classification of skin lesions on small and imbalanced datasets. First, various DCNNs are trained on different small and imbalanced datasets to verify that models with moderate complexity outperform larger models. Second, DropOut and DropBlock regularization are added to reduce overfitting, and a Modified RandAugment augmentation strategy is proposed to deal with sample underrepresentation in small datasets. Finally, a novel Multi-Weighted New Loss (MWNL) function and an end-to-end cumulative learning strategy (CLS) are introduced to overcome the challenges of uneven sample size and classification difficulty and to reduce the impact of abnormal samples on training. By combining Modified RandAugment, MWNL and CLS, our single-DCNN method achieved classification accuracy comparable or superior to that of multiple-model ensembles on different dermoscopic image datasets. Our study shows that this method achieves high classification performance at a low cost in computational resources and inference time, making it potentially suitable for mobile devices for automated screening of skin lesions and many other malignancies in low-resource settings.
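The Multi-Weighted New Loss itself is defined in the paper; the basic ingredient such imbalance-aware losses build on is inverse-frequency class weighting, sketched here with an assumed smoothing exponent:

```python
def class_weights(counts, smooth=0.5):
    """Inverse-frequency weights for an imbalanced dataset, normalized to
    mean 1. smooth < 1 tempers the correction so rare classes are not
    over-weighted; smooth = 1 gives plain inverse frequency."""
    total = sum(counts)
    raw = [(total / c) ** smooth for c in counts]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]
```

On a 900/100 split the minority class receives three times the majority weight at `smooth=0.5`, and proportionally more as `smooth` approaches 1.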
Collapse
|
128
|
A Transfer-Learning-Based Novel Convolution Neural Network for Melanoma Classification. COMPUTERS 2022. [DOI: 10.3390/computers11050064] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Skin cancer is one of the most common human malignancies; it is generally diagnosed by screening and dermoscopic analysis followed by histopathological assessment and biopsy. Deep-learning-based methods have been proposed for skin lesion classification in the last few years. Their major drawback is that they require a considerable amount of training data, which poses a challenge for classifying medical images, as only limited datasets are available. The problem can be tackled through transfer learning, in which a model pre-trained on a huge dataset is utilized and fine-tuned for the problem domain. This paper proposes a new convolutional neural network architecture to classify skin lesions into two classes, benign and malignant. The Google Xception model is used as a base model, on top of which new layers are added and then fine-tuned. The model is optimized with various optimizers to achieve the maximum possible performance gain in the classifier output. On the ISIC archive data, the model achieved its highest training accuracy of 99.78% using the Adam and LazyAdam optimizers and validation and test accuracies of 97.94% and 96.8% using RMSProp; on the HAM10000 dataset with the RMSProp optimizer, it achieved its highest training and prediction accuracies of 98.81% and 91.54%, respectively, compared with other models.
Collapse
|
129
|
A Dermoscopic Inspired System for Localization and Malignancy Classification of Melanocytic Lesions. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This study aims to develop a clinically oriented automated diagnostic tool for distinguishing malignant melanocytic lesions from benign melanocytic nevi across diverse image databases. The accuracy of such systems is hampered by the presence of artifacts, smooth lesion boundaries, and subtle diagnostic features. The proposed framework therefore improves the accuracy of melanoma detection by incorporating the clinical aspects of dermoscopy. Two steps are taken toward this objective. First, artifact removal and lesion localization are performed. Second, various clinically significant features such as shape, color, texture, and pigment network are detected. The features are then reduced by testing their individual significance (i.e., hypothesis testing), and the reduced feature vectors are classified using an SVM classifier. Domain-specific features are used in this design rather than features of abstract images, so the methodology builds on and enhances the domain knowledge of an expert. The proposed approach is implemented on a multi-source dataset (PH2 + ISBI 2016 and 2017) of 515 annotated images, resulting in sensitivity, specificity and accuracy of 83.8%, 88.3%, and 86%, respectively. The experimental results are promising and can be applied to detect the asymmetry, pigment network, colors, and texture of lesions.
Collapse
|
130
|
Fine-Tuned DenseNet-169 for Breast Cancer Metastasis Prediction Using FastAI and 1-Cycle Policy. SENSORS 2022; 22:s22082988. [PMID: 35458972 PMCID: PMC9025766 DOI: 10.3390/s22082988] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 04/09/2022] [Accepted: 04/12/2022] [Indexed: 12/02/2022]
Abstract
Lymph node metastasis in breast cancer may be accurately predicted using a DenseNet-169 model. The current system for identifying metastases in a lymph node, however, is manual and tedious: a pathologist well-versed in the detection and characterization of lymph nodes spends hours investigating histological slides. Furthermore, because of the massive size of most whole-slide images (WSI), it is practical to divide a slide into batches of small image patches and apply methods independently to each patch. The present work introduces a novel method for the automated diagnosis and detection of metastases from whole-slide images using the FastAI framework and the 1-cycle policy, and compares this approach to previous methods. The proposed model surpasses other state-of-the-art methods with more than 97.4% accuracy. In addition, a mobile application is developed for a prompt and quick response; it collects user information and applies the model to diagnose metastases in the early stages of cancer. These results indicate that the suggested model may assist general practitioners in accurately analyzing breast cancer cases, preventing future complications and mortality. With digital image processing, histopathologic interpretation and diagnostic accuracy have improved considerably.
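The 1-cycle policy warms the learning rate up to a single peak and then anneals it far below the starting value; a cosine-shaped sketch in the spirit of fastai's `fit_one_cycle` follows (the `div`, `final_div` and `pct_warm` values are assumed defaults, not the paper's settings):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, div=25.0,
                 final_div=1e4, pct_warm=0.3):
    """Learning rate at a given step of a cosine 1-cycle schedule:
    warm up from max_lr/div to max_lr over the first pct_warm of training,
    then anneal down to max_lr/final_div for the remainder."""
    warm_steps = int(total_steps * pct_warm)
    if step < warm_steps:
        t = step / max(1, warm_steps)
        lo = max_lr / div
        return lo + (max_lr - lo) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warm_steps) / max(1, total_steps - warm_steps)
    lo = max_lr / final_div
    return lo + (max_lr - lo) * (1 + math.cos(math.pi * t)) / 2
```

The large peak acts as a regularizer, while the very low final rate lets the weights settle, which is why one well-tuned cycle often matches longer multi-step schedules.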
Collapse
|
131
|
Superpixel-Oriented Label Distribution Learning for Skin Lesion Segmentation. Diagnostics (Basel) 2022; 12:diagnostics12040938. [PMID: 35453986 PMCID: PMC9026477 DOI: 10.3390/diagnostics12040938] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 03/31/2022] [Accepted: 04/06/2022] [Indexed: 02/04/2023] Open
Abstract
Lesion segmentation is a critical task in skin cancer analysis and detection. Developing deep learning-based segmentation methods requires a large number of human-annotated labels to serve as ground truth for supervised learning. Due to the complexity of dermatological images and subjective differences between dermatologists, the boundary region of the segmentation target is prone to uncertain or erroneous labels, which may lead to unsatisfactory dermoscopy segmentation performance. In addition, a model trained on erroneous one-hot labels may be overconfident, leading to arbitrary predictions and overfitting. In this paper, a superpixel-oriented label distribution learning method is proposed. Superpixels formed by the simple linear iterative clustering (SLIC) algorithm are combined with the one-hot label constraint, and a distance function is defined to convert the hard labels into soft probability distributions. Following the model structure of knowledge distillation, superpixel-oriented label distribution learning yields soft labels carrying structural prior information, which are then transferred as new knowledge to the lesion segmentation network for training. On the ISIC 2018 dataset our method achieves a Dice coefficient of 84%, sensitivity of 79.6%, and precision of 80.4%, improvements of 19.3%, 8.6% and 2.5%, respectively, over the results of U-Net. We also evaluate our method on skin lesion segmentation with several general neural network architectures. The experiments show that our method improves segmentation performance and can be easily integrated into most existing deep learning architectures.
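The paper defines its distance function over SLIC superpixels; as a minimal sketch of the softening idea, with a hypothetical sigmoid decay in place of the paper's function, a hard pixel label can be converted into a two-class distribution whose confidence grows with distance from the annotated boundary:

```python
import math

def soften_label(hard_label, boundary_dist, tau=2.0):
    """Return [p_background, p_foreground] for a pixel.

    hard_label: 0/1 one-hot class; boundary_dist: distance (in pixels) to
    the annotated lesion boundary. On the boundary the label is maximally
    uncertain (0.5/0.5); far from it the distribution approaches one-hot.
    """
    conf = 1.0 / (1.0 + math.exp(-boundary_dist / tau))
    p_fg = conf if hard_label == 1 else 1.0 - conf
    return [1.0 - p_fg, p_fg]
```

Training against such soft targets, as in knowledge distillation, discourages the overconfidence that hard boundary labels induce.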
Collapse
|
132
|
Dual attention based network for skin lesion classification with auxiliary learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
133
|
Shreve JT, Khanani SA, Haddad TC. Artificial Intelligence in Oncology: Current Capabilities, Future Opportunities, and Ethical Considerations. Am Soc Clin Oncol Educ Book 2022; 42:1-10. [PMID: 35687826 DOI: 10.1200/edbk_350652] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
The promise of highly personalized oncology care using artificial intelligence (AI) technologies has been forecasted since the emergence of the field. Cumulative advances across the science are bringing this promise to realization, including refinement of machine learning and deep learning algorithms; expansion in the depth and variety of databases, including multiomics; and the decreased cost of massively parallelized computational power. Examples of successful clinical applications of AI can be found throughout the cancer continuum and in multidisciplinary practice, with computer vision-assisted image analysis in particular having several U.S. Food and Drug Administration-approved uses. Techniques with emerging clinical utility include whole blood multicancer detection from deep sequencing, virtual biopsies, natural language processing to infer health trajectories from medical notes, and advanced clinical decision support systems that combine genomics and clinomics. Substantial issues have delayed broad adoption, with data transparency and interpretability suffering from AI's "black box" mechanism, and intrinsic bias against underrepresented persons limiting the reproducibility of AI models and perpetuating health care disparities. Mid-future projections of AI maturation involve increasing a model's complexity by using multimodal data elements to better approximate an organic system. Far-future positing includes living databases that accumulate all aspects of a person's health into discrete data elements; this will fuel highly convoluted modeling that can tailor treatment selection, dose determination, surveillance modality and schedule, and more. The field of AI has had a historical dichotomy between its proponents and detractors. The successful development of recent applications, and continued investment in prospective validation that defines their impact on multilevel outcomes, has established a momentum of accelerated progress.
Collapse
Affiliation(s)
| | | | - Tufia C Haddad
- Department of Oncology, Mayo Clinic, Rochester, MN; Center for Digital Health, Mayo Clinic, Rochester, MN
| |
Collapse
|
134
|
Hosny KM, Kassem MA. Refined Residual Deep Convolutional Network for Skin Lesion Classification. J Digit Imaging 2022; 35:258-280. [PMID: 35018536 PMCID: PMC8921379 DOI: 10.1007/s10278-021-00552-0] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/16/2021] [Accepted: 11/17/2021] [Indexed: 10/19/2022] Open
Abstract
Skin cancer is the most common type of cancer affecting humans and is usually diagnosed by initial clinical screening followed by dermoscopic analysis. Automated classification of skin lesions is still a challenging task because of the high visual similarity between melanoma and benign lesions. This paper proposes a new residual deep convolutional neural network (RDCNN) for skin lesion diagnosis. The proposed network is trained and tested using six well-known skin cancer datasets: PH2, DermIS and Quest, MED-NODE, ISIC2016, ISIC2017, and ISIC2018. Three experiments are carried out to measure the performance of the proposed RDCNN. In the first experiment, the RDCNN is trained and tested on the original dataset images without any pre-processing or segmentation. In the second experiment, it is tested on segmented images. Finally, the trained model from the second experiment is saved and reused in the third experiment as a pre-trained model, where it is trained again on a different dataset. The proposed RDCNN shows significantly high performance and outperforms existing deep convolutional networks.
Collapse
Affiliation(s)
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig, Egypt
| | - Mohamed A. Kassem
- Department of Robotics and Intelligent Machines, Director of the Quality Assurance Unit, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr el-Sheikh, Egypt
| |
Collapse
|
135
|
Liu P, Zheng G. Handling Imbalanced Data: Uncertainty-guided Virtual Adversarial Training with Batch Nuclear-norm Optimization for Semi-supervised Medical Image Classification. IEEE J Biomed Health Inform 2022; 26:2983-2994. [PMID: 35344500 DOI: 10.1109/jbhi.2022.3162748] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In many clinical settings, medical image datasets suffer from class imbalance, which biases the predictions of trained models toward the majority classes. Semi-supervised learning (SSL) algorithms trained on such imbalanced datasets are even more problematic, since pseudo-supervision for the unlabeled data is generated from the model's biased predictions. To address these issues, we propose a novel semi-supervised deep learning method, i.e., uncertainty-guided virtual adversarial training (VAT) with batch nuclear-norm (BNN) optimization, for large-scale medical image classification. To effectively exploit useful information from both labeled and unlabeled data, we leverage VAT and BNN optimization to harness the underlying knowledge, which helps to improve the discriminability, diversity, and generalization of the trained models. More concretely, our network is trained by minimizing a combination of four losses: a supervised cross-entropy loss, a BNN loss defined on the output matrix of the labeled data batch (lBNN loss), a negative BNN loss defined on the output matrix of the unlabeled data batch (uBNN loss), and a VAT loss on both labeled and unlabeled data. We additionally use uncertainty estimation to filter out unlabeled samples near the decision boundary when computing the VAT loss. We conduct comprehensive experiments on two publicly available datasets and one in-house collected dataset. The experimental results demonstrate that our method achieves better results than state-of-the-art SSL methods.
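The batch nuclear-norm quantity at the heart of the lBNN/uBNN terms can be sketched in a few lines: the nuclear norm (sum of singular values) of the softmaxed batch output matrix is largest when predictions are simultaneously confident and spread across classes. This numpy sketch only illustrates the quantity itself; the signs and weighting of the four losses, and the training loop, are not reproduced here.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def batch_nuclear_norm(logits):
    """Sum of singular values of the (batch x classes) prediction matrix."""
    probs = softmax(logits)
    return np.linalg.svd(probs, compute_uv=False).sum()

# Confident, class-diverse batch vs. a collapsed (all-uniform) batch.
diverse = batch_nuclear_norm(10.0 * np.eye(3))    # ~ one prediction per class
collapsed = batch_nuclear_norm(np.zeros((3, 3)))  # every row uniform
```

Minimizing the negative of this quantity on unlabeled batches (the uBNN term) pushes the model toward confident, class-diverse pseudo-labels.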
Collapse
|
136
|
Attention Module Magnetic Flux Leakage Linked Deep Residual Network for Pipeline In-Line Inspection. SENSORS 2022; 22:s22062230. [PMID: 35336400 PMCID: PMC8949419 DOI: 10.3390/s22062230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 03/10/2022] [Accepted: 03/10/2022] [Indexed: 12/04/2022]
Abstract
Pipeline operational safety is the foundation of the pipeline industry. Inspection and evaluation of defects is an important means of ensuring the safe operation of pipelines. In-line inspection of magnetic flux leakage (MFL) can be used to identify and analyze potential defects. For long-distance in-line MFL inspection of pipelines, identification suffers from low efficiency, misjudgments, and missed detections. To solve these problems, a pipeline MFL inspection signal identification method based on an improved deep residual convolutional neural network with attention modules is proposed. An improved deep residual network based on the VGG16 convolutional neural network is constructed to automatically learn features from the MFL image signals and identify pipeline features and defects. Attention modules are introduced to reduce the influence of noise and compound features on the identification results during in-line inspection. Experimental results on actual pipeline in-line inspections show that the proposed method accurately classifies the MFL in-line inspection image signals and effectively reduces the influence of noise on the feature identification results, with an average classification accuracy of 97.7%. The method can effectively improve the identification accuracy and efficiency of pipeline MFL in-line inspection.
Collapse
|
137
|
Melanoma segmentation using deep learning with test-time augmentations and conditional random fields. Sci Rep 2022; 12:3948. [PMID: 35273282 PMCID: PMC8913825 DOI: 10.1038/s41598-022-07885-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 01/31/2022] [Indexed: 11/08/2022] Open
Abstract
In a computer-aided diagnostic (CAD) system for skin lesion segmentation, variations in the shape and size of skin lesions make the segmentation task more challenging. Lesion segmentation is an initial step in CAD schemes, as it leads to low error rates in quantifying the structure, boundary, and scale of the skin lesion. Subjective clinical assessment of the skin lesion segmentations produced by current state-of-the-art deep learning techniques does not reach the level required by the inter-observer agreement of expert dermatologists. This study proposes a novel deep learning-based, fully automated approach to skin lesion segmentation, including sophisticated pre- and postprocessing. We use three deep learning models: UNet, deep residual U-Net (ResUNet), and improved ResUNet (ResUNet++). The preprocessing phase combines morphological filters with an inpainting algorithm to eliminate unnecessary hair structures from the dermoscopic images. In the postprocessing stage, we use test-time augmentation (TTA) and a conditional random field (CRF) to improve segmentation accuracy. The proposed method was trained and evaluated on the ISIC-2016 and ISIC-2017 skin lesion datasets. When trained on each dataset individually, it achieved average Jaccard indices of 85.96% and 80.05% for ISIC-2016 and ISIC-2017, respectively. When trained on the combined dataset (ISIC-2016 and ISIC-2017), it achieved average Jaccard indices of 80.73% and 90.02% on the ISIC-2017 and ISIC-2016 testing datasets. Due to its scalability and robustness, the proposed framework can be used to design a fully automated computer-aided skin lesion diagnostic system.
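The TTA half of the postprocessing can be sketched generically: predict on several transformed copies of the image, invert each transform on the resulting probability map, and average. The flip set below and the stand-in `model` are illustrative assumptions; the CRF refinement step is not shown.

```python
import numpy as np

# (forward transform, inverse transform) pairs for the augmented predictions
TRANSFORMS = [
    (lambda x: x,                  lambda y: y),                  # identity
    (lambda x: np.flip(x, 0),      lambda y: np.flip(y, 0)),      # vertical flip
    (lambda x: np.flip(x, 1),      lambda y: np.flip(y, 1)),      # horizontal flip
    (lambda x: np.flip(x, (0, 1)), lambda y: np.flip(y, (0, 1))), # both axes
]

def tta_predict(model, image):
    """Average the de-transformed predictions over all augmentations."""
    preds = [inv(model(fwd(image))) for fwd, inv in TRANSFORMS]
    return np.mean(preds, axis=0)

# With an identity "model" the averaged map must equal the input exactly,
# which checks that every inverse matches its forward transform.
img = np.arange(16.0).reshape(4, 4)
avg = tta_predict(lambda x: x, img)
```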
Collapse
|
138
|
Bardou D, Bouaziz H, Lv L, Zhang T. Hair removal in dermoscopy images using variational autoencoders. Skin Res Technol 2022; 28:445-454. [PMID: 35254677 PMCID: PMC9907627 DOI: 10.1111/srt.13145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 01/17/2022] [Indexed: 01/23/2023]
Abstract
BACKGROUND In recent years, the incidence of melanoma has been rising faster than that of other cancers. Although it is the most serious type of skin cancer, diagnosis at an early stage makes it curable. Dermoscopy is a reliable medical technique in which a dermoscope is used to examine the skin and detect melanoma. In the last few decades, digital imaging devices have made great progress, allowing high-quality images from these examinations to be captured and stored. The stored images are now being standardized and used for the automatic detection of melanoma. However, when hair covers the skin, the task becomes challenging, so it is important to eliminate the hair to obtain accurate results. METHODS In this paper, we propose a simple yet efficient method for hair removal using a variational autoencoder, without the need for paired samples. The encoder takes a dermoscopy image as input and builds a latent distribution that ignores hair, which is treated as noise, while the decoder reconstructs a hair-free image. Both the encoder and decoder use a convolutional neural network architecture that provides high performance. Our model is trained in two stages: in the first stage, it is trained on hair-occluded images to output hair-free images, and in the second stage it is optimized on hair-free images to preserve image textures. Although the variational autoencoder produces hair-free images, it does not maintain the quality of the generated images, so we explored the use of three loss functions, the structural similarity index (SSIM), the L1-norm, and the L2-norm, to improve the visual quality of the generated images. RESULTS The hair-free reconstructed images are evaluated with t-distributed stochastic neighbor embedding (t-SNE) feature mapping by visualizing the distributions of the real and synthesized hair-free images. Experiments on the publicly available HAM10000 dataset show that our method is very efficient.
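The three reconstruction terms the authors explore can be combined into a single weighted objective. The sketch below uses a simplified single-window (global) SSIM rather than the usual sliding-window version, and the weights `a`, `b`, `c` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def reconstruction_loss(x, y, a=0.5, b=0.25, c=0.25):
    """Weighted mix of (1 - SSIM), mean-L1 and mean-L2 terms."""
    return (a * (1.0 - global_ssim(x, y))
            + b * np.abs(x - y).mean()
            + c * ((x - y) ** 2).mean())

x = np.linspace(0.0, 1.0, 64).reshape(8, 8)      # toy "hair-free" target
same = reconstruction_loss(x, x)                 # perfect reconstruction
worse = reconstruction_loss(x, np.zeros_like(x)) # poor reconstruction
```

A perfect reconstruction drives all three terms to zero, while the SSIM term penalizes structural (not just pixel-wise) discrepancies.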
Collapse
Affiliation(s)
- Dalal Bardou
- Department of Computer Science and Mathematics University of Abbes Laghrour Khenchela Algeria
| | - Hamida Bouaziz
- Mécatronique Laboratory Department of Computer Science Jijel University Jijel Algeria
| | - Laishui Lv
- School of Computer Science and Engineering Nanjing University of Science and Technology Nanjing China
| | - Ting Zhang
- School of Computer Science and Engineering Nanjing University of Science and Technology Nanjing China
| |
Collapse
|
139
|
Assari Z, Mahloojifar A, Ahmadinejad N. Discrimination of benign and malignant solid breast masses using deep residual learning-based bimodal computer-aided diagnosis system. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
140
|
Yu Z, Nguyen J, Nguyen TD, Kelly J, Mclean C, Bonnington P, Zhang L, Mar V, Ge Z. Early Melanoma Diagnosis With Sequential Dermoscopic Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:633-646. [PMID: 34648437 DOI: 10.1109/tmi.2021.3120091] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Dermatologists often diagnose or rule out early melanoma by evaluating follow-up dermoscopic images of skin lesions. However, existing algorithms for early melanoma diagnosis are developed using single time-point images of lesions. Ignoring the temporal, morphological changes of lesions can lead to misdiagnosis in borderline cases. In this study, we propose a framework for automated early melanoma diagnosis using sequential dermoscopic images. We construct our method in three steps. First, we align sequential dermoscopic images of skin lesions using estimated Euclidean transformations and extract the lesion growth region by computing image differences among the consecutive images; we then propose a spatio-temporal network to capture the dermoscopic changes from the aligned lesion images and the corresponding difference images. Finally, we develop an early diagnosis module to compute probability scores of malignancy for lesion images over time. We collected 179 serial dermoscopic image sequences from 122 patients to verify our method. Extensive experiments show that the proposed model outperforms other commonly used sequence models. We also compared the diagnostic results of our model with those of seven experienced dermatologists and five registrars. Our model achieved higher diagnostic accuracy than the clinicians (63.69% vs. 54.33%) and provided an earlier diagnosis of melanoma (60.7% vs. 32.7% of melanomas correctly diagnosed on the first follow-up images). These results demonstrate that our model can identify melanocytic lesions at high risk of malignant transformation earlier in the disease process and thereby redefine what is possible in the early detection of melanoma.
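The difference-image input the method is built on can be sketched with a toy stand-in: consecutive aligned frames are subtracted to expose the growth region, and a per-visit score is accumulated over time. The real system uses a spatio-temporal network with learned malignancy probabilities; the threshold and the cumulative score below are purely illustrative.

```python
import numpy as np

def growth_maps(frames):
    """Absolute difference images between consecutive (pre-aligned) frames."""
    return [np.abs(b - a) for a, b in zip(frames, frames[1:])]

def change_scores(frames, threshold=0.1):
    """Toy per-visit score: cumulative fraction of pixels that changed."""
    scores, changed = [], 0.0
    for d in growth_maps(frames):
        changed += float((d > threshold).mean())
        scores.append(min(changed, 1.0))
    return scores

# A lesion that grows between visits yields monotonically rising scores.
f0 = np.zeros((8, 8))
f1 = f0.copy(); f1[3:5, 3:5] = 1.0   # small lesion appears
f2 = f0.copy(); f2[2:6, 2:6] = 1.0   # lesion grows
scores = change_scores([f0, f1, f2])
```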
Collapse
|
141
|
Shi Y, Yao X, Xu J, Hu X, Tu L, Lan F, Cui J, Cui L, Huang J, Li J, Bi Z, Li J. A New Approach of Fatigue Classification Based on Data of Tongue and Pulse With Machine Learning. Front Physiol 2022; 12:708742. [PMID: 35197858 PMCID: PMC8859319 DOI: 10.3389/fphys.2021.708742] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 11/03/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Fatigue is a common and subjective symptom associated with many diseases and with suboptimal health status. A reliable, evidence-based approach to distinguishing disease fatigue from non-disease fatigue is lacking. This study aimed to establish a method for early differential diagnosis of fatigue that can distinguish disease fatigue from non-disease fatigue, and to investigate the feasibility of characterizing fatigue states through tongue and pulse data analysis. METHODS The Tongue and Face Diagnosis Analysis-1 (TFDA-1) and Pulse Diagnosis Analysis-1 (PDA-1) instruments were used to collect tongue and pulse data. Four machine learning models were used to perform classification experiments of disease fatigue vs. non-disease fatigue. RESULTS All four classifiers performed better on the joint "Tongue & Pulse" data than on tongue data or pulse data alone. The accuracy rates of models based on logistic regression, support vector machine, random forest, and neural network were (85.51 ± 1.87)%, (83.78 ± 4.39)%, (83.27 ± 3.48)%, and (85.82 ± 3.01)%, with area under the curve estimates of 0.9160 ± 0.0136, 0.9106 ± 0.0365, 0.8959 ± 0.0254, and 0.9239 ± 0.0174, respectively. CONCLUSION This study proposed and validated an innovative, non-invasive differential diagnosis approach. The results suggest that it is feasible to characterize disease fatigue and non-disease fatigue using objective tongue and pulse data.
Collapse
Affiliation(s)
- Yulin Shi
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Xinghua Yao
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Jiatuo Xu
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Xiaojuan Hu
- Shanghai Innovation Center of TCM Health Service, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Liping Tu
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Fang Lan
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Ji Cui
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Longtao Cui
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Jingbin Huang
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Jun Li
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Zijuan Bi
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| | - Jiacai Li
- Basic Medical College, Shanghai University of Traditional Chinese Medicine, Pudong, China
| |
Collapse
|
142
|
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. ROFO-FORTSCHR RONTG 2022; 194:605-612. [PMID: 35211929 DOI: 10.1055/a-1718-4128] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making through combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET. CITATION FORMAT · Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; DOI: 10.1055/a-1718-4128.
Collapse
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| |
Collapse
|
143
|
Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks? APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12042092] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Basal cell carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is producing a high overload in dermatology services. It is therefore desirable to aid physicians in detecting it early. In this paper, we propose a tool for the detection of BCC to provide prioritization in teledermatology consultations. First, we analyze whether a prior segmentation of the lesion improves its subsequent classification. Second, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. Accuracies of 98% for distinguishing BCC from nevus and 95% for classifying BCC vs. all lesions were obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all metrics. We conclude that when deep neural networks are used for classification, a prior segmentation of the lesion does not improve the classification results, while an ensemble of different neural network configurations improves classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones.
Collapse
|
144
|
Wang R, Chen S, Ji C, Fan J, Li Y. Boundary-Aware Context Neural Network for Medical Image Segmentation. Med Image Anal 2022; 78:102395. [DOI: 10.1016/j.media.2022.102395] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2020] [Revised: 02/07/2022] [Accepted: 02/12/2022] [Indexed: 12/13/2022]
|
145
|
Ding J, Song J, Li J, Tang J, Guo F. Two-Stage Deep Neural Network via Ensemble Learning for Melanoma Classification. Front Bioeng Biotechnol 2022; 9:758495. [PMID: 35118054 PMCID: PMC8804371 DOI: 10.3389/fbioe.2021.758495] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Accepted: 12/06/2021] [Indexed: 11/13/2022] Open
Abstract
Melanoma is a skin disease with a high fatality rate; early diagnosis can effectively increase the survival rate of patients. Dermoscopy images fall into three classes, malignant melanoma, benign nevus, and seborrheic keratosis, so classifying melanoma from dermoscopy images is an indispensable diagnostic task. However, early melanoma classification methods could only use low-level image information, so melanoma could not be classified efficiently, while recent deep learning methods mainly depend on a single network; although such a network can extract high-level features, their limited scale and variety restrict the classification results. Therefore, an automatic classification method for melanoma is needed that can make full use of the rich, deep feature information in images. In this study, we propose an ensemble method that integrates different types of classification networks for melanoma classification. Specifically, we first use U-Net to segment the lesion area of each image and generate a lesion mask, then resize images to focus on the lesion; next, we use five strong classification models to classify the dermoscopy images, adding squeeze-and-excitation (SE) blocks to the models to emphasize the more informative features; finally, we use our proposed ensemble network to integrate the five classification results. The experimental results demonstrate the validity of our method. We test our method on the ISIC 2017 challenge dataset and obtain excellent results on multiple metrics; in particular, we achieve an accuracy of 0.909. Our classification framework provides an efficient and accurate way to classify melanoma from dermoscopy images, laying the foundation for early diagnosis and later treatment of melanoma.
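The squeeze-and-excitation block the authors add can be sketched on a single (channels, height, width) feature map; the bottleneck width and the externally supplied weight matrices below are illustrative stand-ins for learned parameters.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    Squeeze: global average pool per channel. Excitation: bottleneck dense
    layer with ReLU, then a dense layer with sigmoid, giving per-channel
    gates in (0, 1). Scale: reweight each channel by its gate.
    """
    z = x.mean(axis=(1, 2))                 # (C,)     squeeze
    h = np.maximum(w1 @ z, 0.0)             # (C//r,)  ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # (C,)     sigmoid gates
    return x * s[:, None, None]             # channel-wise reweighting

# With all-zero weights every gate is sigmoid(0) = 0.5, so the block
# simply halves the feature map, which makes the wiring easy to verify.
x = np.ones((4, 3, 3))
out = se_block(x, np.zeros((2, 4)), np.zeros((4, 2)))
```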
Collapse
Affiliation(s)
- Jiaqi Ding
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jie Song
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jiawei Li
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jijun Tang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
| |
Collapse
|
146
|
Popescu D, El-Khatib M, El-Khatib H, Ichim L. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2022; 22:496. [PMID: 35062458 PMCID: PMC8778535 DOI: 10.3390/s22020496] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 12/28/2021] [Accepted: 01/05/2022] [Indexed: 12/29/2022]
Abstract
Due to its increasing incidence, skin cancer, and especially melanoma, is a serious health problem today. The high mortality rate associated with melanoma makes early detection necessary so that it can be treated promptly and properly. For this reason, many researchers in this domain have sought accurate computer-aided diagnosis systems to assist in the early detection and diagnosis of such diseases. This paper presents a systematic review of recent advances in an area of increased interest for cancer prediction, with a focus on a comparative perspective of melanoma detection using artificial intelligence, especially neural network-based systems. Such systems can be considered intelligent support tools for dermatologists. Theoretical and applied contributions were investigated in the new development trend of multiple neural network architectures based on decision fusion. The most representative articles on neural network-based melanoma detection, published in journals and high-impact conferences, were investigated between 2015 and 2021, with a focus on the 2018-2021 interval for new trends. The main databases and trends in their use for training neural networks to detect melanoma are also presented. Finally, a research agenda is highlighted to advance the field towards the new trends.
Collapse
Affiliation(s)
- Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania; (M.E.-K.); (H.E.-K.); (L.I.)
| | | | | | | |
Collapse
|
147
|
An Improved and Robust Encoder–Decoder for Skin Lesion Segmentation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-021-06403-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
148
|
AIM in Oncology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_94] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
149
|
He X, Tan EL, Bi H, Zhang X, Zhao S, Lei B. Fully Transformer Network for Skin Lesion Analysis. Med Image Anal 2022; 77:102357. [DOI: 10.1016/j.media.2022.102357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2021] [Revised: 12/26/2021] [Accepted: 01/06/2022] [Indexed: 10/19/2022]
|
150
|
Liu C, Guo Y, Jiang F, Xu L, Shen F, Jin Z, Wang Y. Gastrointestinal stromal tumors diagnosis on multi-center endoscopic ultrasound images using multi-scale image normalization and transfer learning. Technol Health Care 2022; 30:47-59. [PMID: 35124583 PMCID: PMC9028612 DOI: 10.3233/thc-228005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
BACKGROUND Automated diagnosis of gastrointestinal stromal tumor (GIST) cancerization is an effective way to improve clinical diagnostic accuracy and reduce the possible risks of biopsy. Although deep convolutional neural networks (DCNNs) have proven very effective in many image classification problems, there is still a lack of studies on endoscopic ultrasound (EUS) images of GISTs. The task remains a substantial challenge, mainly due to the data distribution bias of multi-center images, the significant inter-class similarity and intra-class variation, and insufficient training data. OBJECTIVE The study aims to classify GISTs into higher-risk and lower-risk categories. METHODS First, a novel multi-scale image normalization block is designed to perform same-size and same-resolution resizing of the input data in a parallel manner, and a dilated mask is used to obtain a more accurate region of interest. Then, we construct a multi-way feature extraction and fusion block to extract distinguishable features: a ResNet-50 model built via transfer learning is utilized as a powerful extractor of the tumors' textural features, and tumor size and patient demographic features are extracted as well. Finally, a robust XGBoost classifier is trained on all features. RESULTS Experimental results show that our proposed method achieves an AUC score of 0.844, which is superior to the clinical diagnosis performance. CONCLUSIONS The results provide a solid baseline to encourage further research in this field.
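The late-fusion step (deep textural features plus tumor size and demographics feeding one classifier) can be sketched as plain concatenation. A tiny gradient-descent logistic regression stands in for the paper's XGBoost classifier so the sketch has no external dependencies; the feature shapes and the risk labels are synthetic.

```python
import numpy as np

def fuse_features(deep, size, demo):
    """Concatenate per-patient deep, tumor-size and demographic features."""
    return np.concatenate([deep, size, demo], axis=1)

def train_logreg(X, y, lr=0.1, steps=500):
    """Stand-in linear classifier trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                # logistic-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
X = fuse_features(rng.normal(size=(200, 4)),     # "ResNet-50" features
                  rng.normal(size=(200, 1)),     # tumor size
                  rng.normal(size=(200, 1)))     # demographics
y = (X[:, 0] + X[:, 4] > 0).astype(float)        # synthetic risk label
w, b = train_logreg(X, y)
acc = ((X @ w + b > 0).astype(float) == y).mean()
```

In the paper's pipeline the fused matrix `X` would instead be handed to an XGBoost model; the fusion interface is the same.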
Collapse
Affiliation(s)
- Chengcheng Liu
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Fei Jiang
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
| | - Leiming Xu
- Department of Gastroenterology, Xinhua Hospital Affiliated to Shanghai JiaoTong University of Medicine, Shanghai, China
| | - Feng Shen
- Department of Gastroenterology, Xinhua Hospital Affiliated to Shanghai JiaoTong University of Medicine, Shanghai, China
| | - Zhendong Jin
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
| | - Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China
| |
Collapse
|