151
Nie Y, Sommella P, Carratu M, Ferro M, O'Nils M, Lundgren J. Recent Advances in Diagnosis of Skin Lesions Using Dermoscopic Images Based on Deep Learning. IEEE Access 2022; 10:95716-95747. [DOI: 10.1109/access.2022.3199613]
Affiliation(s)
- Yali Nie
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Paolo Sommella
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Marco Carratu
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Matteo Ferro
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Mattias O'Nils
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Jan Lundgren
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
152
Bektaş J, Bektaş Y, Ersin Kangal E. Integrating a novel SRCRN network for segmentation with representative batch-mode experiments for detecting melanoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103218]
153
An Effective Skin Disease Segmentation Model based on Deep Convolutional Neural Network. International Journal of Intelligent Information Technologies 2022. [DOI: 10.4018/ijiit.298695]
Abstract
Automated segmentation of skin lesions from digitally recorded images is a crucial step in diagnosing skin diseases accurately. This paper proposes a segmentation model for skin lesions based on a Deep Convolutional Neural Network (DCNN) for melanoma, squamous, basal, keratosis, dermatofibroma, and vascular types of skin disease. The DCNN is trained from scratch rather than from pre-trained networks, with different numbers of layers and variations in pooling and activation functions. The proposed model is compared with the winner of the ISIC 2018 challenge Task 1 (skin lesion segmentation) and other methods. Experiments performed on the challenge datasets show better segmentation results. The main contributions are developing an automated segmentation model, evaluating its performance, and comparing it with other state-of-the-art methods. The essence of the proposed work is its simple network architecture and excellent results: it obtains a Jaccard index of 87%, a Dice similarity coefficient of 91%, accuracy of 94%, recall of 94%, and precision of 89%.
154
Wu G, Chen X, Shi Z, Zhang D, Hu Z, Mao Y, Wang Y, Yu J. Convolutional neural network with coarse-to-fine resolution fusion and residual learning structures for cross-modality image synthesis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103199]
155
Hasan MK, Elahi MTE, Alam MA, Jawad MT, Martí R. DermoExpert: Skin lesion classification using a hybrid convolutional neural network through segmentation, transfer learning, and augmentation. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2021.100819]
156
Carvalho R, Morgado AC, Andrade C, Nedelcu T, Carreiro A, Vasconcelos MJM. Integrating Domain Knowledge into Deep Learning for Skin Lesion Risk Prioritization to Assist Teledermatology Referral. Diagnostics (Basel) 2021; 12:36. [PMID: 35054203 PMCID: PMC8775114 DOI: 10.3390/diagnostics12010036]
Abstract
Teledermatology has developed rapidly in recent years and is now an essential tool for early diagnosis. In this work, we aim to improve existing teledermatology processes for skin lesion diagnosis by developing a deep learning approach for risk prioritization, using a dataset of retrospective data from referral requests of the Portuguese National Health System. Given the high complexity of this task, we propose a new prioritization pipeline guided and inspired by domain knowledge. We explored automatic lesion segmentation and tested different learning schemes, namely hierarchical classification and curriculum learning approaches, optionally including additional patient metadata. The final priority level prediction is then obtained by combining the predicted diagnosis with a baseline priority level that accounts for explicit expert knowledge. In both the differential diagnosis and prioritization branches, lesion segmentation with a 30% tolerance for contextual information was shown to improve classification compared with a flat baseline model trained on original images; furthermore, the addition of patient information was not beneficial in most experiments. Curriculum learning delivered better results than a flat or hierarchical approach. Combining the diagnosis information with a knowledge map created in collaboration with dermatologists, together with the baseline priority level, achieved promising results (best macro F1 of 43.93% on a validated test set), paving the way for new data-centric and knowledge-driven approaches.
Affiliation(s)
- Maria João M. Vasconcelos
- Fraunhofer Portugal AICOS, Rua Alfredo Allen, 4200-135 Porto, Portugal
157
Deng X, Yin Q, Guo P. Efficient structural pseudoinverse learning-based hierarchical representation learning for skin lesion classification. Complex Intell Syst 2021. [DOI: 10.1007/s40747-021-00588-3]
Abstract
The success of deep learning in skin lesion classification mainly depends on ultra-deep neural networks and significantly large training data sets. Deep learning training is usually time-consuming, and large labeled datasets are hard to obtain, especially of skin lesion images. Although pre-training and data augmentation can alleviate these issues, some problems remain: (1) inconsistent data domains, resulting in slow convergence; and (2) low robustness to confusing skin lesions. To solve these problems, we propose an efficient structural pseudoinverse learning-based hierarchical representation learning method. Preliminary feature extraction, shallow network feature extraction, and deep learning feature extraction are carried out in turn before the classification of skin lesion images. A Gabor filter and a pre-trained deep convolutional neural network are used for preliminary feature extraction. The structural pseudoinverse learning (S-PIL) algorithm is used to extract the shallow features. S-PIL then preliminarily identifies the skin lesion images that are difficult to classify, forming a new training set for deep learning feature extraction. Through this hierarchical representation learning, we analyze the features of skin lesion images layer by layer to improve the final classification. Our method not only avoids the slow convergence caused by inconsistent data domains but also enhances the training of confusing examples. Without using additional data, our approach outperforms existing methods on the ISIC 2017 and ISIC 2018 datasets.
158
Bi L, Fulham M, Kim J. Hyper-fusion network for semi-automatic segmentation of skin lesions. Med Image Anal 2021; 76:102334. [PMID: 34923251 DOI: 10.1016/j.media.2021.102334]
Abstract
Segmentation of skin lesions is an important step for imaging-based clinical decision support systems. Automatic skin lesion segmentation methods based on fully convolutional networks (FCNs) are regarded as the state-of-the-art for accuracy. However, when training data are insufficient to cover all the variations in skin lesions, where lesions from different patients may differ greatly in size, shape, and texture, these methods fail to segment lesions whose image characteristics are less common in the training datasets. FCN-based semi-automatic segmentation methods, which fuse user inputs with high-level semantic image features derived from FCNs, offer an ideal complement that overcomes the limitations of automatic segmentation methods. These semi-automatic methods rely on state-of-the-art FCNs coupled with user inputs for refinement and are therefore able to tackle challenging skin lesions. However, only a limited number of FCN-based semi-automatic segmentation methods exist, and all of them focus on 'early fusion', where the first few convolutional layers fuse image features and user inputs to derive fused image features for segmentation. In early-fusion methods, the user-input information can be lost after the first few convolutional layers and consequently provides limited guidance and constraint in segmenting challenging skin lesions with inhomogeneous textures and fuzzy boundaries. Hence, in this work, we introduce a hyper-fusion network (HFN) that fuses the extracted user inputs and image features over multiple stages. We separately extract complementary features, which allows the user inputs to be used iteratively along all fusion stages to refine the segmentation.
We evaluated our HFN on three well-established public benchmark datasets, the ISBI Skin Lesion Challenge 2017, 2016, and PH2, and our results show that the HFN is more accurate and generalizable than state-of-the-art methods, in particular on challenging skin lesions.
Affiliation(s)
- Lei Bi
- School of Computer Science, University of Sydney, NSW, Australia
- Michael Fulham
- School of Computer Science, University of Sydney, NSW, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, NSW, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia
159
Huang JS, Liu WS, Yao B, Wang ZX, Chen SF, Sun WF. Electroencephalogram-Based Motor Imagery Classification Using Deep Residual Convolutional Networks. Front Neurosci 2021; 15:774857. [PMID: 34867174 PMCID: PMC8635693 DOI: 10.3389/fnins.2021.774857]
Abstract
The classification of electroencephalogram (EEG) signals is of significant importance in brain-computer interface (BCI) systems. To classify motor imagery EEG types intelligently and with high accuracy, a methodology using wavelet packet decomposition (WPD) and deep residual convolutional networks (DRes-CNN) is proposed. Firstly, EEG waveforms are segmented into sub-signals. Then EEG signal features are obtained through the WPD algorithm, and selected wavelet coefficients are retained and reconstructed into EEG signals in their respective frequency bands. Subsequently, the reconstructed EEG signals are used as input to the proposed deep residual convolutional networks for classification. Finally, the DRes-CNN classifier assigns the motor imagery EEG type. Datasets from the BCI Competition were used to test the performance of the proposed deep learning classifier. Classification experiments show that the average recognition accuracy of this method reaches 98.76%. The proposed method can be further applied to BCI systems for motor imagery control.
Affiliation(s)
- Jing-Shan Huang
- School of Aerospace Engineering, Xiamen University, Xiamen, China; Shenzhen Research Institute of Xiamen University, Shenzhen, China
- Wan-Shan Liu
- School of Aerospace Engineering, Xiamen University, Xiamen, China; Shenzhen Research Institute of Xiamen University, Shenzhen, China
- Bin Yao
- School of Aerospace Engineering, Xiamen University, Xiamen, China; Shenzhen Research Institute of Xiamen University, Shenzhen, China
- Zhan-Xiang Wang
- Institute of Neurosurgery, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Brain Center, The First Affiliated Hospital of Xiamen University, Xiamen, China; Department of Neurosurgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Si-Fang Chen
- Department of Neurosurgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Wei-Fang Sun
- College of Mechanical and Electrical Engineering, Wenzhou University, Wenzhou, China
160
Li J, Wang P, Zhou Y, Liang H, Lu Y, Luan K. A novel classification method of lymph node metastasis in colorectal cancer. Bioengineered 2021; 12:2007-2021. [PMID: 34024255 PMCID: PMC8806456 DOI: 10.1080/21655979.2021.1930333]
Abstract
Colorectal cancer lymph node metastasis, which is highly associated with the patient's cancer recurrence and survival rate, has been the focus of many therapeutic strategies, and identifying it is a key factor in the treatment of patients with colorectal cancer. The popular neural network methods for classifying lymph node metastasis, however, show limitations: the available low-level features are inadequate for classification, and radiologists are unable to quickly review the images. In the present work, an automatic classification method based on deep transfer learning is proposed. Specifically, the method resolves the repetition of low-level features, combines them with high-level features into a new feature map for classification, and adds a merged layer that merges all features transmitted from previous layers into the map of the first fully connected layer. The experiment used a dataset collected from Harbin Medical University Cancer Hospital comprising 3,364 patients, of whom 1,646 samples were positive and 1,718 were negative. The experimental results showed a sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 0.8732, 0.8746, 0.8746, and 0.8728, respectively, with an accuracy of 0.8358 and an AUC of 0.8569. These results demonstrate that our method significantly outperforms previous classification methods for colorectal cancer lymph node metastasis without increasing the depth and width of the model.
Affiliation(s)
- Jin Li
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Peng Wang
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yang Zhou
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang Province, China
- Hong Liang
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yang Lu
- College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, Heilongjiang Province, China
- Kuan Luan
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
161
Mutepfe F, Kalejahi BK, Meshgini S, Danishvar S. Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification. J Med Signals Sens 2021; 11:237-252. [PMID: 34820296 PMCID: PMC8588886 DOI: 10.4103/jmss.jmss_53_20]
Abstract
Background: One of the common limitations in the treatment of cancer is its early detection. The customary medical practice of cancer examination is a visual examination by the dermatologist followed by an invasive biopsy. Nonetheless, this diagnostic approach is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment. Objective: The key objective of this study is to establish a fully automatic model that helps dermatologists in the skin cancer handling process in a way that could improve skin lesion classification accuracy. Method: The work implements a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN required additional fine-tuning to yield better results. Hyperparameter optimization was used to select the best-performing hyperparameter combinations and several network hyperparameters. In this work, we decreased the learning rate from the default 0.001 to 0.0002 and the momentum for the Adam optimization algorithm from 0.9 to 0.5, to reduce the instability issues related to GAN models; at each iteration, the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification task predicting the two classes present in our dataset, namely benign and malignant. Moreover, well-known metrics such as the area under the receiver operating characteristic curve and the confusion matrix were used for evaluating the results and classification accuracy.
Results: The model generated very convincing lesions during the early stages of the experiment, and we could easily visualize a smooth transition in resolution along the way. We achieved an overall test accuracy of 93.5% after fine-tuning most parameters of our network. Conclusion: This classification model provides spatial intelligence that could be useful in the future for cancer risk prediction. Unfortunately, it is difficult to generate high-quality synthetic images that closely resemble real samples, and to compare different classification methods given that some methods use non-public datasets for training.
Affiliation(s)
- Freedom Mutepfe
- Department of Computer Science and Engineering, School of Science and Engineering, Khazar University, Baku, Azerbaijan
- Behnam Kiani Kalejahi
- Department of Computer Science and Engineering, School of Science and Engineering, Khazar University, Baku, Azerbaijan; Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Saeed Meshgini
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Sebelan Danishvar
- Department of Electronic and Computer Engineering, Brunel University, London, UK
162
Li Y, Han G, Liu X. DCNet: Densely Connected Deep Convolutional Encoder-Decoder Network for Nasopharyngeal Carcinoma Segmentation. Sensors 2021; 21:7877. [PMID: 34883878 PMCID: PMC8659888 DOI: 10.3390/s21237877]
Abstract
Nasopharyngeal carcinoma segmentation in magnetic resonance imagery (MRI) is vital to radiotherapy, as exact dose delivery hinges on an accurate delineation of the gross tumor volume (GTV). However, the large-scale variation in tumor volume is intractable, and the performance of current models is mostly unsatisfactory, with indistinguishable and blurred boundaries in the segmentation results for tiny tumor volumes. To address this problem, we propose a densely connected deep convolutional network consisting of an encoder network and a corresponding decoder network, which extracts high-level semantic features from different levels and concurrently uses low-level spatial features to obtain fine-grained segmentation masks. A skip-connection architecture is incorporated and modified to propagate spatial information to the decoder network. Preliminary experiments were conducted on 30 patients. Experimental results show our model outperforms all baseline models, with an improvement of 4.17%. An ablation study is performed, and the effectiveness of the novel loss function is validated.
Affiliation(s)
- Yang Li
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Guanghui Han
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
- Xiujian Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China
163
Benyahia S, Meftah B, Lézoray O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021; 74:101701. [PMID: 34861582 DOI: 10.1016/j.tice.2021.101701]
Abstract
For the various forms of skin lesion, many different feature extraction methods have been investigated. Feature extraction is indeed a crucial step in machine learning processes, and in general we can distinguish handcrafted from deep learning features. In this paper, we investigate the efficiency of 17 commonly used pre-trained convolutional neural network (CNN) architectures as feature extractors, combined with 24 machine learning classifiers, for the classification of skin lesions from two different datasets: ISIC 2019 and PH2. We found that DenseNet201 combined with Fine KNN or Cubic SVM achieved the best accuracy (92.34% and 91.71%) on the ISIC 2019 dataset. The results also show that the suggested method outperforms other approaches, with an accuracy of 99% on the PH2 dataset.
Affiliation(s)
- Samia Benyahia
- Department of Computer Science, Faculty of Exact Sciences, University of Mascara, Mascara, Algeria
- Olivier Lézoray
- Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Caen, France
164
Takiddin A, Schneider J, Yang Y, Abd-Alrazaq A, Househ M. Artificial Intelligence for Skin Cancer Detection: Scoping Review. J Med Internet Res 2021; 23:e22934. [PMID: 34821566 PMCID: PMC8663507 DOI: 10.2196/22934]
Abstract
BACKGROUND: Skin cancer is the most common cancer type affecting humans. Traditional skin cancer diagnosis methods are costly, require a professional physician, and take time. Hence, to aid in diagnosing skin cancer, artificial intelligence (AI) tools are being used, including shallow and deep machine learning-based methodologies that are trained to detect and classify skin cancer using computer algorithms and deep neural networks.
OBJECTIVE: The aim of this study was to identify and group the different types of AI-based technologies used to detect and classify skin cancer. The study also examined the reliability of the selected papers by studying the correlation of data set size and number of diagnostic classes with the performance metrics used to evaluate the models.
METHODS: We conducted a systematic search for papers using the Institute of Electrical and Electronics Engineers (IEEE) Xplore, Association for Computing Machinery Digital Library (ACM DL), and Ovid MEDLINE databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. The studies included in this scoping review had to fulfill several selection criteria: being specifically about skin cancer, detecting or classifying skin cancer, and using AI technologies. Study selection and data extraction were independently conducted by two reviewers. Extracted data were narratively synthesized, with studies grouped based on the diagnostic AI techniques and their evaluation metrics.
RESULTS: We retrieved 906 papers from the 3 databases, of which 53 were eligible for this review. Shallow AI-based techniques were used in 14 studies, and deep AI-based techniques were used in 39 studies. The studies used up to 11 evaluation metrics to assess the proposed models, with 39 studies using accuracy as the primary evaluation metric. Overall, studies that used smaller data sets reported higher accuracy.
CONCLUSIONS: This paper examined multiple AI-based skin cancer detection models. However, a direct comparison between methods was hindered by the varied use of different evaluation metrics and image types. Performance scores were affected by factors such as data set size, number of diagnostic classes, and techniques. Hence, the reliability of shallow and deep models with higher accuracy scores is questionable, since they were trained and tested on relatively small data sets with few diagnostic classes.
Affiliation(s)
- Abdulrahman Takiddin
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Yin Yang
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Alaa Abd-Alrazaq
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
165
Tang P, Yan X, Nan Y, Xiang S, Krammer S, Lasser T. FusionM4Net: A multi-stage multi-modal learning algorithm for multi-label skin lesion classification. Med Image Anal 2021; 76:102307. [PMID: 34861602 DOI: 10.1016/j.media.2021.102307]
Abstract
Skin disease is one of the most common diseases in the world. Deep learning-based methods have achieved excellent skin lesion recognition performance, most of them based only on dermoscopy images. Recent works that use multi-modality data (patient meta-data, clinical images, and dermoscopy images) adopt a one-stage fusion approach and optimize information fusion only at the feature level; they do not fuse information at the decision level and thus cannot fully use the data of all modalities. This work proposes a novel two-stage multi-modal learning algorithm (FusionM4Net) for multi-label skin disease classification. In the first stage, we construct a FusionNet, which exploits and integrates the representations of clinical and dermoscopy images at the feature level, and then uses Fusion Scheme 1 to conduct information fusion at the decision level. In the second stage, to further incorporate the patient's meta-data, we propose Fusion Scheme 2, which integrates the multi-label predictive information from the first stage with the patient's meta-data to train an SVM cluster. The final diagnosis is formed by fusing the predictions from the first and second stages. Our algorithm was evaluated on the seven-point checklist dataset, a well-established multi-modality multi-label skin disease dataset. Without using the patient's meta-data, the proposed FusionM4Net's first stage (FusionM4Net-FS) achieved an average accuracy of 75.7% for multi-classification tasks and 74.9% for diagnostic tasks, which is more accurate than other state-of-the-art methods. By further fusing the patient's meta-data at FusionM4Net's second stage (FusionM4Net-SS), the entire FusionM4Net boosts the average accuracy to 77.0% and the diagnostic accuracy to 78.5%, indicating robust and excellent classification performance on this label-imbalanced dataset. The corresponding code is available at: https://github.com/pixixiaonaogou/MLSDR.
Affiliation(s)
- Peng Tang
- Department of Informatics and Munich School of BioEngineering, Technical University of Munich, Munich, Germany
- Xintong Yan
- State Grid Henan Economic Research Institute, Zhengzhou, Henan 450052, China
- Yang Nan
- National Heart and Lung Institute, Imperial College London, London, UK
- Shao Xiang
- Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Hubei 430079, China
- Sebastian Krammer
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
- Tobias Lasser
- Department of Informatics and Munich School of BioEngineering, Technical University of Munich, Munich, Germany
166
Dai D, Dong C, Xu S, Yan Q, Li Z, Zhang C, Luo N. Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation. Med Image Anal 2021; 75:102293. [PMID: 34800787 DOI: 10.1016/j.media.2021.102293]
Abstract
Computer-Aided Diagnosis (CAD) for dermatological diseases offers one of the most notable showcases where deep learning technologies display their impressive performance in acquiring and surpassing human experts. In such the CAD process, a critical step is concerned with segmenting skin lesions from dermoscopic images. Despite remarkable successes attained by recent deep learning efforts, much improvement is still anticipated to tackle challenging cases, e.g., segmenting lesions that are irregularly shaped, bearing low contrast, or possessing blurry boundaries. To address such inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which is able to accurately and reliably segment a variety of lesions with efficiency. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in an encoder, and a multi-scale residual decoding fusion module (MsR-DFM) is applied in a decoder to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the newly proposed pipeline, we propose a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces conventional convolutional layers in encoder and decoder networks. Furthermore, we introduce a novel pooling module (Soft-pool) to medical image segmentation for the first time, retaining more helpful information when down-sampling and getting better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on ISIC 2016, 2017, 2018, and PH2. Experimental results consistently demonstrate that the proposed Ms RED attains significantly superior segmentation performance across five popularly used evaluation criteria. 
Last but not least, the new model utilizes much fewer model parameters than its peer approaches, leading to a greatly reduced number of labeled samples required for model training, which in turn produces a substantially faster converging training process than its peers. The source code is available at https://github.com/duweidai/Ms-RED.
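The Soft-pool module mentioned above replaces max- or average-pooling with an exponentially weighted average of each window's activations. A minimal NumPy sketch of that idea (my reading of soft pooling as described in the SoftPool literature, not the authors' implementation; the window handling and shapes here are assumptions):

```python
import numpy as np

def soft_pool2d(x: np.ndarray, k: int = 2) -> np.ndarray:
    """Soft pooling: each k x k window is reduced to a softmax-weighted
    average of its own activations, retaining more information than max-
    or average-pooling. (No max-subtraction for numerical stability; fine
    for the small activations in this sketch.)"""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]                 # crop to a multiple of k
    windows = x.reshape(h // k, k, w // k, k)     # (H/k, k, W/k, k)
    weights = np.exp(windows)
    return (weights * windows).sum(axis=(1, 3)) / weights.sum(axis=(1, 3))

feat = np.array([[1.0, 3.0], [0.0, 2.0]])
print(soft_pool2d(feat))  # single 2x2 window reduced to one weighted value
```

Because every activation contributes in proportion to exp(activation), strong responses dominate without weaker ones being discarded outright, which is the stated motivation for using it when down-sampling lesion features.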
Collapse
Affiliation(s)
- Duwei Dai
- Institute of Medical Artificial Intelligence, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China
| | - Caixia Dong
- Institute of Medical Artificial Intelligence, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China
| | - Songhua Xu
- Institute of Medical Artificial Intelligence, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China.
| | - Qingsen Yan
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, 5005, Australia
| | - Zongfang Li
- Institute of Medical Artificial Intelligence, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China.
| | - Chunyan Zhang
- Institute of Medical Artificial Intelligence, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China
| | - Nana Luo
- Affiliated Hospital of Jining Medical University, Jining, 272000, China
| |
Collapse
|
167
|
Bamba Y, Ogawa S, Itabashi M, Kameoka S, Okamoto T, Yamamoto M. Automated recognition of objects and types of forceps in surgical images using deep learning. Sci Rep 2021; 11:22571. [PMID: 34799625 PMCID: PMC8604928 DOI: 10.1038/s41598-021-01911-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 10/26/2021] [Indexed: 12/15/2022] Open
Abstract
Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
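The per-class recall and precision values quoted above follow the standard definitions; a short refresher with hypothetical counts (not the paper's raw numbers):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Precision = TP/(TP+FP): of the detections made, how many were right.
    Recall = TP/(TP+FN): of the true instances, how many were found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for one forceps class (illustrative only):
p, r = precision_recall(tp=98, fp=2, fn=2)
print(f"precision={p:.1%} recall={r:.1%}")  # precision=98.0% recall=98.0%
```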
Collapse
Affiliation(s)
- Yoshiko Bamba
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan.
| | - Shimpei Ogawa
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| | - Michio Itabashi
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| | | | - Takahiro Okamoto
- Department of Surgery 2, Tokyo Women's Medical University, Tokyo, Japan
| | - Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| |
Collapse
|
168
|
An ensemble-based convolutional neural network model powered by a genetic algorithm for melanoma diagnosis. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06655-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Melanoma is one of the main causes of cancer-related deaths. The development of new computational methods as an important tool for assisting doctors can lead to early diagnosis and effectively reduce mortality. In this work, we propose a convolutional neural network architecture for melanoma diagnosis inspired by ensemble learning and genetic algorithms. The architecture is designed by a genetic algorithm that finds optimal members of the ensemble. Additionally, the abstract features of all models are merged and, as a result, additional prediction capabilities are obtained. The diagnosis is achieved by combining all individual predictions. In this manner, the training process is implicitly regularized, showing better convergence, mitigating the overfitting of the model, and improving the generalization performance. The aim is to find the models that best contribute to the ensemble. The proposed approach also leverages data augmentation, transfer learning, and a segmentation algorithm. The segmentation can be performed without training and on a central processing unit, thus avoiding a significant amount of computational power while maintaining competitive performance. To evaluate the proposal, an extensive experimental study was conducted on sixteen skin image datasets, where state-of-the-art models were significantly outperformed. This study corroborated that genetic algorithms can be employed to effectively find suitable architectures for the diagnosis of melanoma, achieving overall prediction performance 11% and 13% better than the closest model on dermoscopic and non-dermoscopic images, respectively. Finally, the proposal was implemented in a web application to assist dermatologists and can be consulted at http://skinensemble.com.
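The entry does not spell out how the genetic algorithm selects ensemble members, but the general pattern (binary masks over candidate models, majority-vote fitness on a validation set, elitist selection plus point mutation) can be sketched as follows. All data, parameters, and the fitness function here are toy stand-ins, not the paper's method:

```python
import random

random.seed(0)

# Toy stand-in: each "model" is just its per-sample predictions on a
# validation set (the paper evolves real CNN ensemble members).
labels = [1, 0, 1, 1, 0, 1, 0, 0]
models = [
    [1, 0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 1, 0, 0],
]

def fitness(mask):
    """Validation accuracy of the majority vote over the selected models."""
    chosen = [m for m, bit in zip(models, mask) if bit]
    if not chosen:
        return 0.0
    votes = [round(sum(col) / len(chosen)) for col in zip(*chosen)]
    return sum(v == y for v, y in zip(votes, labels)) / len(labels)

def evolve(pop_size=8, generations=20):
    pop = [[random.randint(0, 1) for _ in models] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # elitism: keep the best half
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(len(child))] ^= 1   # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```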
Collapse
|
169
|
Ali S, Li J, Pei Y, Khurram R, Rehman KU, Rasool AB. State-of-the-Art Challenges and Perspectives in Multi-Organ Cancer Diagnosis via Deep Learning-Based Methods. Cancers (Basel) 2021; 13:5546. [PMID: 34771708 PMCID: PMC8583666 DOI: 10.3390/cancers13215546] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 10/28/2021] [Accepted: 10/29/2021] [Indexed: 11/16/2022] Open
Abstract
Thus far, cancer remains the most common cause of death in the world. It consists of abnormally expanding tissue that threatens human survival. Hence, timely detection of cancer is important to improving patients' survival rates. In this survey, we analyze state-of-the-art approaches for multi-organ cancer detection, segmentation, and classification. This article reviews recent work in the breast, brain, lung, and skin cancer domains. Afterwards, we analytically compare the existing approaches to provide insight into ongoing trends and future challenges. This review also provides an objective description of widely employed imaging techniques, imaging modalities, gold-standard databases, and related literature on each cancer in 2016-2021. The main goal is to systematically examine cancer diagnosis systems for the aforementioned organs of the human body. Our critical survey analysis reveals that more than 70% of deep learning researchers attain promising results with CNN-based approaches for the early diagnosis of multi-organ cancer. This survey includes an extensive discussion of current research challenges, possible solutions, and prospects. This research will provide novice researchers with valuable information to deepen their knowledge and also provide room to develop new robust computer-aided diagnosis systems, which assist health professionals in bridging the gap between rapid diagnosis and treatment planning for cancer patients.
Collapse
Affiliation(s)
- Saqib Ali
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
| | - Rooha Khurram
- Beijing Key Laboratory for Green Catalysis and Separation, Department of Chemistry and Chemical Engineering, Beijing University of Technology, Beijing 100124, China;
| | - Khalil ur Rehman
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Abdul Basit Rasool
- Research Institute for Microwave and Millimeter-Wave (RIMMS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan;
| |
Collapse
|
170
|
Skin Lesion Extraction Using Multiscale Morphological Local Variance Reconstruction Based Watershed Transform and Fast Fuzzy C-Means Clustering. Symmetry (Basel) 2021. [DOI: 10.3390/sym13112085] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Early identification of melanocytic skin lesions increases the survival rate of skin cancer patients. Automated melanocytic skin lesion extraction from dermoscopic images using computer vision approaches is a challenging task, as the lesions present in an image can be of different colors, there may be contrast variation near the lesion boundaries, and lesions may have different sizes and shapes. Therefore, lesion extraction from dermoscopic images is a fundamental step for automated melanoma identification. In this article, a watershed transform based on the fast fuzzy c-means (FCM) clustering algorithm is proposed for the extraction of melanocytic skin lesions from dermoscopic images. Initially, the proposed method removes the artifacts from the dermoscopic images and enhances the texture regions. The image is then filtered using a Gaussian filter and a local variance filter to enhance the lesion boundary regions. Next, the watershed transform based on MMLVR (multiscale morphological local variance reconstruction) is introduced to acquire the superpixels of the image with accurate boundary regions. Finally, the fast FCM clustering technique is applied to the superpixels of the image to attain the final lesion extraction result. The proposed method is tested on three publicly available skin lesion image datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018. Experimental evaluation shows that the proposed method achieves good results.
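The fast FCM step above operates on superpixel statistics rather than raw pixels; the core fuzzy c-means updates it builds on look like this (a generic 1-D sketch of standard FCM, not the paper's fast variant):

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, eps=1e-9):
    """Plain fuzzy c-means on 1-D intensities: alternate between
    membership-weighted center updates and the standard inverse-distance
    membership update, with fuzzifier m."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                         # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)      # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + eps
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)              # standard FCM membership update
    return centers, u

pixels = np.array([0.05, 0.1, 0.12, 0.8, 0.85, 0.9])
centers, u = fuzzy_cmeans(pixels)
print(np.sort(centers))   # two cluster centres, near the dark and bright groups
```

Running it on superpixels instead of pixels (as the paper does) shrinks the data the updates iterate over, which is where the "fast" comes from.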
Collapse
|
171
|
Wang X, Huang W, Lu Z, Huang S. Multi-level Attentive Skin Lesion Learning for Melanoma Classification. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3924-3927. [PMID: 34892090 DOI: 10.1109/embc46164.2021.9629858] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Melanoma classification plays an important role in skin lesion diagnosis. Nevertheless, melanoma classification is a challenging task, due to the appearance variation of skin lesions and the interference of noise from dermoscopic imaging. In this paper, we propose a multi-level attentive skin lesion learning (MASLL) network to enhance melanoma classification. Specifically, we design a local learning branch with a skin lesion localization (SLL) module to assist the network in learning lesion features from the region of interest. In addition, we propose a weighted feature integration (WFI) module to fuse the lesion information from the global and local branches, which further enhances the feature discrimination capability for skin lesions. Experimental results on the ISIC 2017 dataset show the effectiveness of the proposed method for melanoma classification.
Collapse
|
172
|
Ding S, Wu Z, Zheng Y, Liu Z, Yang X, Yang X, Yuan G, Xie J. Deep attention branch networks for skin lesion classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 212:106447. [PMID: 34678529 DOI: 10.1016/j.cmpb.2021.106447] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Accepted: 09/28/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE The skin lesion usually covers a small region of the dermoscopy image, and lesions of different categories may be highly similar. Therefore, it is essential to design an elaborate network for accurate skin lesion classification, which can focus on semantically meaningful lesion parts. Although Class Activation Mapping (CAM) shows good localization capability in highlighting the discriminative parts, it cannot be obtained in the forward propagation process. METHODS We propose a Deep Attention Branch Network (DABN) model, which introduces attention branches to expand conventional Deep Convolutional Neural Networks (DCNN). The attention branch is designed to obtain the CAM in the training stage, which is then utilized as an attention map to make the network focus on discriminative parts of skin lesions. DABN is applicable to multiple DCNN structures and can be trained in an end-to-end manner. Moreover, a novel Entropy-guided Loss Weighting (ELW) strategy is designed to counter the influence of class imbalance in skin lesion datasets. RESULTS The proposed method achieves an Average Precision (AP) of 0.719 on the ISIC-2016 dataset and an average area under the ROC curve (AUC) of 0.922 on the ISIC-2017 dataset. Compared with other state-of-the-art methods, our method obtains better performance without external data or ensemble learning. Moreover, extensive experiments demonstrate that it can be applied to multi-class classification tasks and improves the mean sensitivity by more than 2.6% across different DCNN structures. CONCLUSIONS The proposed method can adaptively focus on the discriminative regions of dermoscopy images and allows for effective training when facing class imbalance, leading to improved skin lesion classification performance; it could also be applied to other clinical applications.
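The CAM that DABN's attention branch produces is, at its core, a weight-blended sum of the last convolutional feature maps. A minimal sketch with made-up tensor shapes (the branch design and training details of the paper are not reproduced here):

```python
import numpy as np

# Hypothetical shapes: K feature maps of size H x W from the last conv
# layer, and the classifier weights (classes x K) of a
# global-average-pooling head. The CAM for class c blends the maps by
# that class's weights.
rng = np.random.default_rng(1)
K, H, W = 8, 7, 7
feature_maps = rng.random((K, H, W))
fc_weights = rng.random((3, K))      # 3 classes

def class_activation_map(feats, weights, cls):
    cam = np.tensordot(weights[cls], feats, axes=1)   # sum_k w_k * F_k
    cam -= cam.min()
    return cam / (cam.max() + 1e-9)                   # normalise to [0, 1]

cam = class_activation_map(feature_maps, fc_weights, cls=0)
print(cam.shape)  # a spatial attention map, one value per position
```

Used as an attention map, high values mark the spatial positions whose features most support the chosen class, which is what lets the network re-focus on the lesion region.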
Collapse
Affiliation(s)
- Saisai Ding
- School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Zhongyi Wu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
| | - Yanyan Zheng
- The Wenzhou Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, 325000, China; Wenzhou People's Hospital, Wenzhou, 325000, China
| | - Zhaobang Liu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
| | - Xiaodong Yang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
| | - Xiaokai Yang
- The Wenzhou Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, 325000, China; Wenzhou People's Hospital, Wenzhou, 325000, China
| | - Gang Yuan
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China.
| | - Jing Xie
- The Wenzhou Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, 325000, China; Wenzhou People's Hospital, Wenzhou, 325000, China.
| |
Collapse
|
173
|
|
174
|
Guergueb T, Akhloufi MA. Melanoma Skin Cancer Detection Using Recent Deep Learning Models . ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3074-3077. [PMID: 34891892 DOI: 10.1109/embc46164.2021.9631047] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Melanoma is considered one of the world's deadliest cancers. This type of skin cancer will spread to other areas of the body if not detected at an early stage. Convolutional Neural Network (CNN) based classifiers are currently considered among the most effective melanoma detection techniques. This study presents the use of recent deep CNN approaches to detect melanoma skin cancer and investigate suspicious lesions. Tests were conducted using a set of more than 36,000 images extracted from multiple datasets. The obtained results show that the best-performing deep learning approach achieves high scores, with an accuracy and Area Under the Curve (AUC) above 99%.
Collapse
|
175
|
Kaur R, Hosseini HG, Sinha R. Lesion Border Detection of Skin Cancer Images Using Deep Fully Convolutional Neural Network with Customized Weights. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3035-3038. [PMID: 34891883 DOI: 10.1109/embc46164.2021.9630512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning techniques have been widely employed in semantic segmentation problems, especially in medical image analysis, for understanding image patterns. Skin cancer is a life-threatening problem, and timely detection can reduce the mortality rate. The aim is to segment the lesion area from the skin cancer image to help experts deeply understand the formation of tissues and cancer cells. Thus, we proposed an improved fully convolutional neural network (FCNN) architecture for lesion segmentation in dermoscopic skin cancer images. The FCNN consists of multiple feature extraction layers forming a deep framework that provides a larger receptive field for generating pixel labels. The novelty of the network lies in the way the layers are stacked and in the generation of customized weights in each convolutional layer to produce a full-resolution feature map. The proposed model was compared with the top four winners of the International Skin Imaging Collaboration (ISIC) challenge using evaluation metrics such as accuracy, Jaccard index, and Dice coefficient. It outperformed the given state-of-the-art methods with higher accuracy and Jaccard index values.
Collapse
|
176
|
Liu C, Qiao M, Jiang F, Guo Y, Jin Z, Wang Y. TN-USMA Net: Triple normalization-based gastrointestinal stromal tumors classification on multicenter EUS images with ultrasound-specific pretraining and meta attention. Med Phys 2021; 48:7199-7214. [PMID: 34412155 DOI: 10.1002/mp.15172] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 07/11/2021] [Accepted: 07/31/2021] [Indexed: 12/16/2022] Open
Abstract
PURPOSE Accurate quantification of gastrointestinal stromal tumors' (GISTs) risk stratification on multicenter endoscopic ultrasound (EUS) images plays a pivotal role in aiding the surgical decision-making process. This study focuses on automatically classifying higher-risk and lower-risk GISTs in the presence of a multicenter setting and limited data. METHODS In this study, we retrospectively enrolled 914 patients with GISTs (1824 EUS images in total) from 18 hospitals in China. We propose a triple normalization-based deep learning framework with ultrasound-specific pretraining and meta attention, namely, the TN-USMA model. The triple normalization module consists of intensity normalization, size normalization, and spatial resolution normalization. First, the image intensity is standardized, and same-size regions of interest (ROIs) and same-resolution tumor masks are generated in parallel. Then, a transfer learning strategy is utilized to mitigate the data scarcity problem. The same-size ROIs are fed into a deep architecture with ultrasound-specific pretrained weights, which are obtained from self-supervised learning on a large volume of unlabeled ultrasound images. Meanwhile, tumor size features are calculated from the same-resolution masks individually. Afterward, the size features together with two demographic features are integrated into the model before the final classification layer using a meta attention mechanism to further enhance feature representations. The diagnostic performance of the proposed method was compared with one radiomics-based method and two state-of-the-art deep learning methods. Four evaluation metrics, namely, the accuracy, the area under the receiver operating characteristic curve, the sensitivity, and the specificity, were used to evaluate model performance.
RESULTS The proposed TN-USMA model achieves an overall accuracy of 0.834 (95% confidence interval [CI]: 0.772, 0.885), an area under the receiver operating characteristic curve of 0.881 (95% CI: 0.825, 0.924), a sensitivity of 0.844 (95% CI: 0.672, 0.947), and a specificity of 0.832 (95% CI: 0.762, 0.888). The AUC significantly outperforms that of the other two deep learning approaches (p < 0.05, DeLong test). Moreover, the performance is stable under different multicenter dataset partitions. CONCLUSIONS The proposed TN-USMA model can successfully differentiate higher-risk GISTs from lower-risk ones. It is accurate, robust, generalizable, and efficient for potential clinical applications.
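Of the three normalizations described above, the intensity and size steps are the easiest to illustrate. The snippet below shows one plausible reading (z-score standardisation and nearest-neighbour resizing to a fixed ROI size; both are assumptions, since the entry gives no formulas):

```python
import numpy as np

def normalize_intensity(img):
    """Z-score standardisation: one plausible reading of the paper's
    intensity normalisation (exact formula not given in the entry)."""
    return (img - img.mean()) / (img.std() + 1e-8)

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize to a fixed ROI size (size normalisation):
    pick, for each output cell, the nearest source pixel."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

roi = np.arange(12.0).reshape(3, 4)
fixed = resize_nearest(normalize_intensity(roi), 8, 8)
print(fixed.shape)  # every ROI ends up the same fixed size
```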
Collapse
Affiliation(s)
- Chengcheng Liu
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Mengyun Qiao
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Fei Jiang
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
| | - Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Zhendong Jin
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
| | - Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China
| |
Collapse
|
177
|
Pereira PMM, Thomaz LA, Tavora LMN, Assuncao PAA, Fonseca-Pinto RM, Paiva RP, Faria SMMD. Melanoma classification using light-Fields with morlet scattering transform and CNN: Surface depth as a valuable tool to increase detection rate. Med Image Anal 2021; 75:102254. [PMID: 34649195 DOI: 10.1016/j.media.2021.102254] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 07/27/2021] [Accepted: 09/22/2021] [Indexed: 11/15/2022]
Abstract
Medical image classification through learning-based approaches has been increasingly used, namely in the discrimination of melanoma. However, for skin lesion classification in general, such methods commonly rely on dermoscopic or other 2D-macro RGB images. This work proposes to exploit characteristics beyond those of conventional 2D images by considering a third dimension (depth) that characterises the skin surface rugosity, which can be obtained from light-field images, such as those available in the SKINL2 dataset. To achieve this goal, a processing pipeline was deployed using a Morlet scattering transform and a CNN model, allowing a comparison between using 2D information only, 3D information only, or both. Results show that discrimination between Melanoma and Nevus reaches an accuracy of 84.00%, 74.00%, or 94.00% when using only 2D, only 3D, or both, respectively. An increase of 14.29 pp in sensitivity and 8.33 pp in specificity is achieved when expanding beyond conventional 2D information by also using depth. When discriminating between Melanoma and all other types of lesions (a further imbalanced setting), an increase of 28.57 pp in sensitivity and a decrease of 1.19 pp in specificity is achieved for the same test conditions. Overall, the results of this work demonstrate significant improvements over conventional approaches.
Collapse
Affiliation(s)
- Pedro M M Pereira
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal.
| | - Lucas A Thomaz
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Luis M N Tavora
- ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Pedro A A Assuncao
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui M Fonseca-Pinto
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui Pedro Paiva
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal
| | - Sergio M M de Faria
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| |
Collapse
|
178
|
Peng X, Xu B, Xu Z, Yan X, Zhang N, Qin Y, Ma Q, Li J, Zhao N, Zhang Q. Accuracy improvement in plastics classification by laser-induced breakdown spectroscopy based on a residual network. OPTICS EXPRESS 2021; 29:33269-33280. [PMID: 34809142 DOI: 10.1364/oe.438331] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 09/20/2021] [Indexed: 06/13/2023]
Abstract
The whole ecosystem is suffering from serious plastic pollution. Automatic and accurate classification is an essential step in effective plastics recycling. In this work, we propose an accurate approach for plastics classification using a residual network based on laser-induced breakdown spectroscopy (LIBS). To increase efficiency, the LIBS spectral data were compressed by a peak-searching algorithm based on the continuous wavelet transform, then transformed into characteristic images for training and validation of the residual network. Acrylonitrile butadiene styrene (ABS), polyamide (PA), polymethyl methacrylate (PMMA), and polyvinyl chloride (PVC) from 13 manufacturers were used. The accuracy of the proposed method in few-shot learning was evaluated. The results show that even when only a single training image was used, the validation accuracy of classification by plastic type with the residual network remained 100%, much higher than that of conventional classification algorithms (BP, kNN, and SVM). Furthermore, the training and testing data were separated by manufacturer to evaluate the robustness of the proposed method against the various additives in plastics, where 73.34% accuracy was obtained. To demonstrate the superior classification accuracy of the proposed method, all the evaluations were also implemented using the conventional classification algorithms (kNN, BP, and SVM). The results confirm that the residual network achieves significantly higher accuracy in plastics classification and shows great potential in the plastics recycling industry for pollution mitigation.
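The compression step pairs a continuous wavelet transform with peak searching. A bare-bones stand-in using a Ricker (Mexican-hat) wavelet on a synthetic two-line spectrum (the widths, threshold, and spectrum here are invented for illustration and are not the paper's algorithm):

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, the kernel typically used for
    CWT-based peak searching."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-t**2 / (2 * a**2))

def cwt_peaks(spectrum, width=5, thresh=0.5):
    """Correlate with a Ricker wavelet at one scale and keep local maxima
    above a relative threshold: the essence of wavelet peak searching."""
    w = ricker(10 * width, width)
    response = np.convolve(spectrum, w, mode="same")
    return [i for i in range(1, len(response) - 1)
            if response[i] > response[i - 1]
            and response[i] > response[i + 1]
            and response[i] > thresh * response.max()]

x = np.linspace(0, 100, 500)
spectrum = np.exp(-(x - 30) ** 2 / 2) + 0.8 * np.exp(-(x - 70) ** 2 / 2)
peaks = cwt_peaks(spectrum)
print(peaks)   # indices near the two synthetic emission lines
```

Keeping only the detected peak positions and heights is what compresses a full LIBS spectrum down to the characteristic data the network is trained on.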
Collapse
|
179
|
Zhong L, Meng Q, Chen Y, Du L, Wu P. A laminar augmented cascading flexible neural forest model for classification of cancer subtypes based on gene expression data. BMC Bioinformatics 2021; 22:475. [PMID: 34600466 PMCID: PMC8487515 DOI: 10.1186/s12859-021-04391-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Accepted: 09/22/2021] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Correctly classifying the subtypes of cancer is of great significance for the in-depth study of cancer pathogenesis and the realization of personalized treatment for cancer patients. In recent years, classification of cancer subtypes using deep neural networks and gene expression data has gradually become a research hotspot. However, most classifiers may face overfitting and low classification accuracy when dealing with small-sample-size, high-dimensional biological data. RESULTS In this paper, a laminar augmented cascading flexible neural forest (LACFNForest) model was proposed to complete the classification of cancer subtypes. This model is a cascading flexible neural forest using the deep flexible neural forest (DFNForest) as the base classifier. A hierarchical broadening ensemble method was proposed, which ensures the robustness of classification results and avoids wasting model structure and capacity as much as possible. We also introduced an output judgment mechanism at each layer of the forest to reduce the computational complexity of the model. The deep neural forest was extended to a densely connected deep neural forest to improve the prediction results. Experiments on RNA-seq gene expression data showed that LACFNForest performs better in the classification of cancer subtypes than conventional methods. CONCLUSION The LACFNForest model effectively improves the accuracy of cancer subtype classification with good robustness. It provides a new approach to the ensemble learning of classifiers in terms of structural design.
Collapse
Affiliation(s)
- Lianxin Zhong
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key laboratory of Network Based Intelligent Computing, Jinan, 250022, China
| | - Qingfang Meng
- School of Information Science and Engineering, University of Jinan, Jinan, China.
- Shandong Provincial Key laboratory of Network Based Intelligent Computing, Jinan, 250022, China.
| | - Yuehui Chen
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key laboratory of Network Based Intelligent Computing, Jinan, 250022, China
| | - Lei Du
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key laboratory of Network Based Intelligent Computing, Jinan, 250022, China
| | - Peng Wu
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key laboratory of Network Based Intelligent Computing, Jinan, 250022, China
| |
Collapse
|
180
|
Chi J, Han X, Wu C, Wang H, Ji P. X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.021] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
181
|
Nawaz M, Mehmood Z, Nazir T, Naqvi RA, Rehman A, Iqbal M, Saba T. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc Res Tech 2021; 85:339-351. [PMID: 34448519 DOI: 10.1002/jemt.23908] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2021] [Revised: 07/09/2021] [Accepted: 07/25/2021] [Indexed: 11/09/2022]
Abstract
Melanoma is the most life-threatening and fatal member of the family of skin cancers. Modern technological developments and research methodologies have made it possible to detect and identify this kind of skin cancer more effectively; however, automated localization and segmentation of skin lesions at earlier stages remains challenging, owing to the low contrast between melanoma moles and the surrounding skin and the high color similarity between affected and non-affected areas. In this paper, we present a fully automated method for segmenting skin melanoma at its earliest stage by employing a deep learning approach, namely a faster region-based convolutional neural network (Faster-RCNN) combined with fuzzy k-means clustering (FKM). Several clinical images are used to test the presented method so that it may help dermatologists diagnose this life-threatening disease at its earliest stage. The method first preprocesses the dataset images to remove noise and illumination problems and enhance the visual information, then applies the Faster-RCNN to obtain a feature vector of fixed length. After that, FKM is employed to segment the melanoma-affected portion of skin with variable size and boundaries. The performance of the presented method is evaluated on three standard datasets, namely ISBI-2016, ISIC-2017, and PH2, and the results show that it outperforms state-of-the-art approaches. The method attains an average accuracy of 95.40%, 93.1%, and 95.6% on the ISIC-2016, ISIC-2017, and PH2 datasets, respectively, showing its robustness for skin lesion recognition and segmentation.
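The FKM step above is standard fuzzy c-means clustering. A minimal 1-D sketch is given below; it clusters raw scalar values rather than the Faster-RCNN feature vectors the paper uses, and the spread-out initialization is an assumption for illustration only.

```python
def fuzzy_kmeans(xs, k=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy k-means (fuzzy c-means); returns (centers, memberships).

    Centers are initialized across the sorted data (k >= 2 assumed).
    Illustrative sketch only -- the paper clusters learned feature
    vectors, not scalar intensities.
    """
    s = sorted(xs)
    centers = [s[i * (len(s) - 1) // (k - 1)] for i in range(k)]
    u = [[0.0] * k for _ in xs]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        for i, x in enumerate(xs):
            d = [abs(x - c) + 1e-12 for c in centers]
            for a in range(k):
                u[i][a] = 1.0 / sum((d[a] / d[b]) ** (2.0 / (m - 1.0))
                                    for b in range(k))
        # Center update: membership-weighted mean of the data
        for a in range(k):
            w = [u[i][a] ** m for i in range(len(xs))]
            centers[a] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u
```

Unlike hard k-means, every point keeps a graded membership in every cluster, which suits lesion boundaries with gradual contrast.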
Collapse
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
| | - Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
| | - Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul, South Korea
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| | - Munwar Iqbal
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| |
Collapse
|
182
|
A New Method for Syndrome Classification of Non-Small-Cell Lung Cancer Based on Data of Tongue and Pulse with Machine Learning. BIOMED RESEARCH INTERNATIONAL 2021; 2021:1337558. [PMID: 34423031 PMCID: PMC8373490 DOI: 10.1155/2021/1337558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 07/12/2021] [Accepted: 07/23/2021] [Indexed: 12/18/2022]
Abstract
Objective To explore the tongue and pulse data characteristics of non-small-cell lung cancer with Qi deficiency syndrome and Yin deficiency syndrome, establish syndrome classification models based on tongue and pulse data using machine learning methods, and evaluate the feasibility of syndrome classification from such data. Methods We collected tongue and pulse data from non-small-cell lung cancer patients with Qi deficiency syndrome (n = 163), patients with Yin deficiency syndrome (n = 174), and healthy controls (n = 185) using an intelligent tongue diagnosis analysis instrument and a pulse diagnosis analysis instrument, respectively. We described the characteristics and examined the correlations of the tongue and pulse data. Four machine learning methods, namely random forest, logistic regression, support vector machine, and neural network, were used to establish classification models based on symptom data, on tongue and pulse data, and on their combination. Results Tongue diagnosis indices that differed significantly between Qi deficiency syndrome and Yin deficiency syndrome were TB-a, TB-S, TB-Cr, TC-a, TC-S, TC-Cr, perAll, and the tongue coating texture indices TC-CON, TC-ASM, TC-MEAN, and TC-ENT. The significantly different pulse diagnosis indices were t4 and t5. The classification performance of each model by dataset was: tongue and pulse < symptom < symptom plus tongue and pulse. The neural network model performed best on the combined symptom, tongue, and pulse dataset, with an area under the ROC curve of 0.9401 and an accuracy of 0.8806. Conclusions It is feasible to use tongue and pulse data as part of an objective diagnostic basis for Qi deficiency and Yin deficiency syndromes of non-small-cell lung cancer.
Collapse
|
183
|
Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening. J Digit Imaging 2021; 33:619-631. [PMID: 31848896 DOI: 10.1007/s10278-019-00269-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Transfer learning using deep pre-trained convolutional neural networks is increasingly used to solve problems in the medical field. Despite being trained on images from an entirely different domain, these networks can adapt to solve problems in a new domain. Transfer learning involves fine-tuning a pre-trained network with optimal values of hyperparameters such as learning rate, batch size, and number of training epochs. Training identifies the features relevant to a specific problem, and adapting a pre-trained network to a different problem requires fine-tuning until such features are obtained. This is facilitated by the large number of filters in the convolutional layers of a pre-trained network. Only a few of these filters are useful for a problem in a different domain, while the rest are irrelevant and may even reduce the network's efficacy. By minimizing the number of filters required to solve the problem, the efficiency of training the network can be improved. In this study, we identify relevant filters in the pre-trained networks AlexNet and VGG-16 for detecting cervical cancer from cervix images. This paper presents a novel hybrid transfer learning technique in which a CNN is built and trained from scratch, with initial weights taken only from those filters identified as relevant in AlexNet and VGG-16. The study used 2198 cervix images, 1090 belonging to the negative class and 1108 to the positive class. Our experiment using hybrid transfer learning achieved an accuracy of 91.46%.
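One simple way to operationalize "relevant filters" is to score each filter on a probe set and keep the top fraction. The sketch below uses mean absolute activation as the relevance score, which is an assumption for illustration; the paper's actual relevance criterion may differ.

```python
def select_relevant_filters(activations, keep_ratio=0.25):
    """Rank filters by mean |activation| over a probe set; keep the top fraction.

    `activations` maps filter index -> list of activation values observed
    on probe images. The scoring rule is a simple stand-in, not the
    paper's method; the kept filters would seed a smaller from-scratch CNN.
    """
    scores = {f: sum(abs(v) for v in vals) / len(vals)
              for f, vals in activations.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]
```

The returned indices would select which pre-trained filter weights to copy into the new network's first layers.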
Collapse
|
184
|
Ain QU, Al-Sahaf H, Xue B, Zhang M. Generating Knowledge-Guided Discriminative Features Using Genetic Programming for Melanoma Detection. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2021. [DOI: 10.1109/tetci.2020.2983426] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
185
|
Kassem MA, Hosny KM, Damaševičius R, Eltoukhy MM. Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review. Diagnostics (Basel) 2021; 11:1390. [PMID: 34441324 PMCID: PMC8391467 DOI: 10.3390/diagnostics11081390] [Citation(s) in RCA: 69] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/04/2022] Open
Abstract
Computer-aided skin lesion diagnosis is a growing area of research, and researchers have shown increasing interest in developing computer-aided diagnosis systems. This paper aims to review, synthesize, and evaluate the quality of evidence for the diagnostic accuracy of such systems. The study covers papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases: 53 articles using traditional machine learning methods and 49 using deep learning methods. The studies are compared based on their contributions, the methods used, and the achieved results. The work identifies the main challenges in evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection, and racial bias.
Collapse
Affiliation(s)
- Mohamed A. Kassem
- Department of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kaferelshiekh University, Kaferelshiekh 33511, Egypt;
| | - Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
| | - Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
| | - Mohamed Meselhy Eltoukhy
- Computer Science Department, Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt;
| |
Collapse
|
186
|
Dong Y, Wang L, Cheng S, Li Y. FAC-Net: Feedback Attention Network Based on Context Encoder Network for Skin Lesion Segmentation. SENSORS 2021; 21:s21155172. [PMID: 34372409 PMCID: PMC8347551 DOI: 10.3390/s21155172] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 07/27/2021] [Accepted: 07/27/2021] [Indexed: 11/25/2022]
Abstract
Considerable research indicates that skin lesions are an early symptom of skin cancer, and their segmentation remains a hot research topic. Data augmentation of dermatological segmentation datasets leads to models with large numbers of parameters, limiting the real-world application of smart assisted medicine. Hence, this paper proposes an effective feedback attention network (FAC-Net). The network is equipped with a feedback fusion block (FFB) and an attention mechanism block (AMB); combining these two modules yields richer and more specific feature maps without data augmentation. We ran extensive experiments on public datasets (ISIC2018, ISBI2017, ISBI2016) and evaluated the segmentation results with metrics such as the Jaccard index (JA) and Dice coefficient (DC). On the ISIC2018 dataset, we obtained a DC of 91.19% and a JA of 83.99%, improving both main metrics by more than 1% over the base network. The metrics also improved on the other two datasets. The experiments demonstrate that, without any dataset augmentation, our lightweight model achieves better segmentation performance than most deep learning architectures.
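The JA and DC metrics reported throughout these segmentation papers are computed from the overlap of predicted and ground-truth masks. A minimal pure-Python version over flat binary masks (not the authors' evaluation code) is:

```python
def dice_jaccard(pred, target):
    """Dice coefficient and Jaccard index for flat binary masks (0/1 lists).

    Dice = 2|P∩T| / (|P| + |T|); Jaccard = |P∩T| / |P∪T|.
    Both are defined as 1.0 when prediction and target are both empty.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2.0 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The two are monotonically related (Jaccard = Dice / (2 - Dice)), which is why papers often report both from the same masks.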
Collapse
|
187
|
Zhao C, Shuai R, Ma L, Liu W, Wu M. Segmentation of dermoscopy images based on deformable 3D convolution and ResU-NeXt +. Med Biol Eng Comput 2021; 59:1815-1832. [PMID: 34304370 DOI: 10.1007/s11517-021-02397-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 06/16/2021] [Indexed: 11/25/2022]
Abstract
Melanoma is one of the most dangerous skin cancers. Current melanoma segmentation is mainly based on fully convolutional networks (FCNs) and U-Net. However, these networks are prone to parameter redundancy and to vanishing gradients during backpropagation as the network deepens, which reduces the Jaccard index of the skin lesion segmentation model. To solve these problems and improve the survival rate of melanoma patients, this paper proposes an improved skin lesion segmentation model based on deformable 3D convolution and ResU-NeXt++ (D3DC-ResU-NeXt++). The new modules in D3DC-ResU-NeXt++ can replace ordinary modules in existing 2D convolutional neural networks (CNNs) and can be trained efficiently through standard backpropagation with high segmentation accuracy. In particular, we introduce a new data preprocessing method comprising dilation, cropping, resizing, and hair removal (DCRH), which improves the Jaccard index of skin lesion segmentation. Because rectified Adam (RAdam) does not easily fall into a local optimum and converges quickly, we also adopt RAdam as the training optimizer. Experiments show that our model performs excellently on the ISIC2018 Task 1 dataset, achieving a Jaccard index of 86.84%. The proposed method improves the Jaccard index of skin lesion segmentation and can assist dermatologists in determining the types of skin lesions and the boundary between lesions and normal skin, so as to improve the survival rate of skin cancer patients. Overview of the proposed model: D3DC-ResU-NeXt++ has strong spatial geometry processing capabilities and is used to segment the skin lesion images; DCRH and transfer learning are used to preprocess the dataset and D3DC-ResU-NeXt++ respectively, which highlights the difference between the lesion area and normal skin and enhances the segmentation efficiency and robustness of the network; RAdam speeds up the convergence of the network and improves segmentation efficiency.
Collapse
Affiliation(s)
- Chen Zhao
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China
| | - Renjun Shuai
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China.
| | - Li Ma
- Nanjing Health Information Center, Nanjing, 210003, China
| | - Wenjia Liu
- Changzhou No. 2 People's Hospital affiliated with Nanjing Medical University, Changzhou, 213003, China
| | - Menglin Wu
- College of Computer Science and Technology, Nanjing Tech University, Nanjing, 211816, China
| |
Collapse
|
188
|
Abstract
PURPOSE OF REVIEW Artificial intelligence has become popular in medical applications, specifically as a clinical support tool for computer-aided diagnosis. These tools are typically applied to medical data (images, molecular data, clinical variables, etc.) and use statistical and machine-learning methods to measure model performance. In this review, we summarize and discuss the most recent radiomic pipelines used for clinical analysis. RECENT FINDINGS Currently, the management of only a limited set of cancers benefits from artificial intelligence, mostly through computer-aided diagnosis that avoids biopsy analysis and its additional risks and costs. Most artificial intelligence tools are based on imaging features, known as radiomic analysis, which can be refined into predictive models from noninvasively acquired imaging data. This review explores the progress of artificial intelligence-based radiomic tools for clinical applications, with a brief description of the necessary technical steps, and explains how new radiomic approaches based on deep learning (deep radiomic analysis) can benefit from deep convolutional neural networks and be applied to limited datasets. SUMMARY Before radiomic algorithms can be relied on, further investigations are recommended to involve deep learning in radiomic models, with additional validation steps on various cancer types.
Collapse
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
| | - Yousef Katib
- Department of Radiology, Taibah University, Al-Madinah, Saudi Arabia
| | - Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
| |
Collapse
|
189
|
Raj R, Londhe ND, Sonawane R. Automated psoriasis lesion segmentation from unconstrained environment using residual U-Net with transfer learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106123. [PMID: 33975181 DOI: 10.1016/j.cmpb.2021.106123] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/18/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE The automatic segmentation of psoriasis lesions from digital images is a challenging task due to the unconstrained imaging environment and non-uniform background. Existing conventional or machine learning-based image processing methods for automatic psoriasis lesion segmentation have several limitations, such as dependency on hand-crafted features, human intervention, unreliable performance that degrades as data grow, and manual pre-processing steps for removing the background or other artifacts. METHODS In this paper, we propose a fully automatic approach based on a deep learning model using the transfer learning paradigm for segmenting psoriasis lesions from digital images of different body regions of psoriasis patients. The proposed model is based on the U-Net architecture, whose encoder path uses a pre-trained residual network as a backbone. The model is retrained on a self-prepared psoriasis dataset with corresponding lesion segmentation annotations. RESULTS The performance of the proposed method is evaluated using five-fold cross-validation. The method achieves an average Dice Similarity Index of 0.948 and a Jaccard Index of 0.901 for the intended task. Transfer learning improves segmentation performance by about 4.4% in Dice Similarity Index and 7.6% in Jaccard Index compared with training the proposed model from scratch. CONCLUSIONS An extensive comparative analysis against state-of-the-art segmentation models and the existing literature validates the promising performance of the proposed framework. Our method will thus provide a basis for objective area assessment of psoriasis lesions.
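The five-fold cross-validation protocol used above partitions the images into five disjoint test folds, each paired with the remaining data for training. A minimal index-splitting sketch (a generic utility, not the authors' pipeline) is:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffled index splits for k-fold cross-validation.

    Returns a list of (train_idx, test_idx) pairs covering all n samples,
    with every index appearing in exactly one test fold.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # round-robin, near-equal folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]
```

Each fold's metrics (Dice, Jaccard) are then averaged over the k runs, as in the reported 0.948/0.901 figures.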
Collapse
Affiliation(s)
- Ritesh Raj
- Electrical Engineering Department, National Institute of Technology Raipur, Raipur, Chhattisgarh, 492010, India
| | - Narendra D Londhe
- Electrical Engineering Department, National Institute of Technology Raipur, Raipur, Chhattisgarh, 492010, India.
| | - Rajendra Sonawane
- Psoriasis Clinic and Research Centre, Psoriatreat, Pune, Maharashtra, 411004, India
| |
Collapse
|
190
|
|
191
|
|
192
|
Cheong KH, Tang KJW, Zhao X, Koh JEW, Faust O, Gururajan R, Ciaccio EJ, Rajinikanth V, Acharya UR. An automated skin melanoma detection system with melanoma-index based on entropy features. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
193
|
Peng C, Jie-Xin L. The incidence and risk of cutaneous toxicities associated with dabrafenib in melanoma patients: a systematic review and meta-analysis. Eur J Hosp Pharm 2021; 28:182-189. [PMID: 32883694 PMCID: PMC8239268 DOI: 10.1136/ejhpharm-2020-002347] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2020] [Revised: 06/20/2020] [Accepted: 06/30/2020] [Indexed: 12/31/2022] Open
Abstract
OBJECTIVE Dabrafenib, an inhibitor of mutated BRAF, has significant clinical activity in melanoma patients but is linked to a spectrum of cutaneous toxicities. Thus, our meta-analysis was conducted to evaluate the type, incidence and risks of dermatological toxicities from dabrafenib. METHODS Systematic searches were performed using electronic databases such as Embase and PubMed and conference abstracts published by the American Society of Clinical Oncology. Eligible studies were limited to prospective phase I, II and III clinical trials and expanded-access (ie, outside clinical trials) programmes of melanoma patients receiving dabrafenib monotherapy (150 mg, twice daily) or combination therapy of dabrafenib (150 mg, twice daily) plus trametinib (2 mg, once daily). The outcomes were mainly the incidence rate and risk of all-grade cutaneous toxicities associated with dabrafenib in melanoma patients. RESULTS Twenty trials comprising a total of 3359 patients were included in the meta-analysis. The meta-analysis showed that the overall incidence of all-grade rash for melanoma patients assigned dabrafenib was 30.00% (95% CI 0.07 to 0.71), cutaneous squamous-cell carcinoma (cSCC) 16.00% (95% CI 0.11 to 0.24), alopecia 21% (95% CI 0.11 to 0.37), keratoacanthoma (KA) 20.00% (95% CI 0.12 to 0.31), hyperkeratosis (HK) 14.00% (95% CI 0.09 to 0.22) and pruritus 8.00% (95% CI 0.05 to 0.12). All-grade rash occurred in 19.00% (95% CI 0.15 to 0.25), cSCC in 10.00% (95% CI 0.04 to 0.22), alopecia in 6.00% (95% CI 0.03 to 0.12), KA in 6.00% (95% CI 0.04 to 0.09) and pruritus in 2/1265 patients assigned dabrafenib plus trametinib. The summary risk ratio (RR) showed that the combination of dabrafenib with trametinib versus dabrafenib was associated with a significantly increased risk of all-grade rash (RR 1.35, 95% CI 1.01 to 1.80) and a decreased risk of cSCC (RR 0.40, 95% CI 0.18 to 0.89), alopecia (RR 0.19, 95% CI 0.12 to 0.30) and HK (RR 0.25, 95% CI 0.10 to 0.62). 
CONCLUSION In summary, the most frequent cutaneous adverse reactions from dabrafenib were rash, cSCC, alopecia, KA, HK and pruritus. There was a significantly decreased risk of cSCC, alopecia and HK with the combination of dabrafenib with trametinib versus dabrafenib alone. Clinicians should be aware of these risks and perform regular clinical monitoring.
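The risk ratios (RR) with 95% CIs summarized above follow the standard large-sample formula on the log scale. The sketch below shows that computation; the event counts in the usage test are made-up illustrative numbers, not data from this meta-analysis.

```python
import math

def risk_ratio(a, n1, b, n2):
    """Risk ratio of arm 1 (a events / n1) vs arm 2 (b events / n2),
    with a 95% CI from the usual log-scale standard error:
    SE = sqrt(1/a - 1/n1 + 1/b - 1/n2).
    """
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

A CI that crosses 1.0, as in the example below, indicates no statistically significant difference between arms.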
Collapse
Affiliation(s)
- Chen Peng
- Department of Pharmacy, Renmin Hospital of Wuhan University, Wuhan University, Wuhan, China
| | | |
Collapse
|
194
|
Baig R, Bibi M, Hamid A, Kausar S, Khalid S. Deep Learning Approaches Towards Skin Lesion Segmentation and Classification from Dermoscopic Images - A Review. Curr Med Imaging 2021; 16:513-533. [PMID: 32484086 DOI: 10.2174/1573405615666190129120449] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 12/17/2018] [Accepted: 01/02/2019] [Indexed: 02/08/2023]
Abstract
BACKGROUND Automated intelligent systems for unbiased diagnosis are a primary requirement for pigmented lesion analysis and have gained the attention of researchers in the last few decades. These systems involve multiple phases, such as pre-processing, feature extraction, segmentation, classification, and post-processing. It is crucial to accurately localize and segment the skin lesion. Recent enhancements in machine learning algorithms and dermoscopic techniques have reduced misclassification rates, so the focus on computer-aided systems has increased exponentially in recent years. Computer-aided diagnostic systems are a reliable resource for dermatologists analyzing the type of cancer, but it is widely acknowledged that even higher accuracy is needed before such systems can be adopted in the diagnostic process for life-threatening diseases. INTRODUCTION Skin cancer is one of the most threatening cancers. It occurs through the abnormal multiplication of cells. The three core types of skin cells are squamous, basal, and melanocytes, and there are two broad classes of skin cancer: melanocytic and non-melanocytic. Because it is difficult to differentiate between benign and malignant melanoma, dermatologists sometimes misclassify them. Melanoma is estimated to be the 19th most frequent cancer; it is riskier than basal and squamous carcinoma because it spreads rapidly throughout the body. Hence, to lower the risk of death, it is critical to diagnose the correct type of cancer in its early, rudimentary phases. It can occur on any part of the body but is most likely to appear on the chest, back, and legs. METHODS The paper presents a review of segmentation and classification techniques for skin lesion detection. Dermoscopy and its features are discussed briefly, followed by image pre-processing techniques. A thorough review of the segmentation and classification phases of skin lesion detection using deep learning techniques is presented, the literature is discussed, and a comparative analysis of the discussed methods is given. CONCLUSION In this paper, we have surveyed more than 100 papers and comparatively analyzed state-of-the-art techniques, models, and methodologies. Malignant melanoma is one of the most threatening and deadliest cancers, and for the last few decades researchers have put extra effort into its accurate diagnosis. The main challenges of dermoscopic skin lesion images are low contrast, multiple lesions, irregular and fuzzy borders, blood vessels, regression, hairs, bubbles, variegated coloring, and other distortions; the lack of large training datasets makes these problems even more challenging. Due to recent advances in deep learning, and especially its outstanding performance in medical imaging, it has become important to review the performance of deep learning algorithms in skin lesion segmentation. We discuss the results of different techniques on the basis of evaluation parameters such as the Jaccard coefficient, sensitivity, specificity, and accuracy, and list the major achievements in this domain with detailed discussion of the techniques. In the future, results are expected to improve by combining the capabilities of deep learning frameworks with other pre- and post-processing techniques so that reliable and accurate diagnostic systems can be built.
Collapse
Affiliation(s)
- Ramsha Baig
- Department of Computer Science, Bahria University, Islamabad, Pakistan
| | - Maryam Bibi
- Department of Computer Science, Bahria University, Islamabad, Pakistan
| | - Anmol Hamid
- Department of Computer Science, Bahria University, Islamabad, Pakistan
| | - Sumaira Kausar
- Department of Computer Science, Bahria University, Islamabad, Pakistan
| | - Shahzad Khalid
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
| |
Collapse
|
195
|
Towards Accurate Diagnosis of Skin Lesions Using Feedforward Back Propagation Neural Networks. Diagnostics (Basel) 2021; 11:diagnostics11060936. [PMID: 34067493 PMCID: PMC8224667 DOI: 10.3390/diagnostics11060936] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/19/2021] [Accepted: 05/21/2021] [Indexed: 01/10/2023] Open
Abstract
In the automatic detection framework, there have been many attempts to develop models for real-time melanoma detection. To effectively discriminate benign from malignant skin lesions, this work investigates sixty architectures of the Feedforward Back Propagation Network (FFBPN), based on shape asymmetry, for an optimal structural design covering both the number of hidden neurons and the selection of input data. Shape asymmetry was chosen because of the 5-10% disagreement between dermatologists regarding the efficacy of asymmetry in diagnosing malignant melanoma. Asymmetry is quantified from the lesion shape (contour), the moment of inertia of the lesion shape, and histograms. The FFBPN's architectural flexibility makes it a favorable tool for avoiding over-parameterization of the ANN and, equally, for discarding redundant input datasets that usually result in poor test performance. The FFBPN was tested on four public image datasets containing melanoma, dysplastic nevus, and nevus images. Experimental results on multiple benchmark datasets demonstrate that asymmetry A2 is a meaningful feature for skin lesion classification, and that an FFBPN with 16 neurons in the hidden layer can model the data without compromising prediction accuracy.
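Shape-asymmetry features like those above compare a lesion mask with its mirror image. The sketch below is a deliberately simplified stand-in for the paper's contour/inertia-based A2 measure: it flips a binary mask about the image's vertical midline (assuming the lesion is centered) and reports the mismatched fraction.

```python
def asymmetry_index(mask):
    """Left-right asymmetry of a binary mask (list of 0/1 rows).

    Flips the mask about the image's vertical midline and returns the
    mismatched area over twice the lesion area: 0 = perfectly symmetric,
    1 = no overlap with its own mirror image. Simplified illustration,
    not the paper's A2 definition; assumes a centered lesion.
    """
    area = sum(sum(row) for row in mask)
    if area == 0:
        return 0.0
    mismatch = sum(p != q for row in mask for p, q in zip(row, row[::-1]))
    return mismatch / (2.0 * area)
```

A real pipeline would first align the flip axis with the lesion's principal axis (e.g. via image moments) before comparing.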
Collapse
|
196
|
Dildar M, Akram S, Irfan M, Khan HU, Ramzan M, Mahmood AR, Alsaiari SA, Saeed AHM, Alraddadi MO, Mahnashi MH. Skin Cancer Detection: A Review Using Deep Learning Techniques. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:5479. [PMID: 34065430 PMCID: PMC8160886 DOI: 10.3390/ijerph18105479] [Citation(s) in RCA: 104] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 04/26/2021] [Accepted: 05/13/2021] [Indexed: 12/11/2022]
Abstract
Skin cancer is one of the most dangerous forms of cancer. It is caused by unrepaired deoxyribonucleic acid (DNA) in skin cells, which generates genetic defects or mutations in the skin. Skin cancer tends to spread gradually to other body parts, so it is most curable, and best detected, in its initial stages. The increasing rate of skin cancer cases, the high mortality rate, and expensive medical treatment all require that its symptoms be diagnosed early. Considering the seriousness of these issues, researchers have developed various early-detection techniques for skin cancer. Lesion parameters such as symmetry, color, size, and shape are used to detect skin cancer and to distinguish benign skin cancer from melanoma. This paper presents a detailed systematic review of deep learning techniques for the early detection of skin cancer. Research papers published in well-reputed journals and relevant to skin cancer diagnosis were analyzed, and the findings are presented as tools, graphs, tables, techniques, and frameworks for better understanding.
Collapse
Affiliation(s)
- Mehwish Dildar
- Government Associate College for Women Mari Sargodha, Sargodha 40100, Pakistan;
| | - Shumaila Akram
- Department of Computer Science and Information Technology, University of Sargodha, Sargodha 40100, Pakistan;
| | - Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University Saudi Arabia, Najran 61441, Saudi Arabia;
| | - Hikmat Ullah Khan
- Department of Computer Science, Wah Campus, Comsats University, Wah Cantt 47040, Pakistan;
| | - Muhammad Ramzan
- Department of Computer Science and Information Technology, University of Sargodha, Sargodha 40100, Pakistan;
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54782, Pakistan
| | - Abdur Rehman Mahmood
- Department of Computer Science, COMSATS University Islamabad, Islamabad 440000, Pakistan;
| | - Soliman Ayed Alsaiari
- Department of Internal Medicine, Faculty of Medicine, Najran University, Najran 61441, Saudi Arabia;
| | - Abdul Hakeem M Saeed
- Department of Dermatology, Najran University Hospital, Najran 61441, Saudi Arabia;
| | | | - Mater Hussen Mahnashi
- Department of Medicinal Chemistry, Pharmacy School, Najran University, Najran 61441, Saudi Arabia;
| |
Collapse
|
197
|
Jiang J, Xie Q, Cheng Z, Cai J, Xia T, Yang H, Yang B, Peng H, Bai X, Yan M, Li X, Zhou J, Huang X, Wang L, Long H, Wang P, Chu Y, Zeng FW, Zhang X, Wang G, Zeng F. AI based colorectal disease detection using real-time screening colonoscopy. PRECISION CLINICAL MEDICINE 2021; 4:109-118. [PMID: 35694157 PMCID: PMC8982552 DOI: 10.1093/pcmedi/pbab013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 04/23/2021] [Accepted: 05/17/2021] [Indexed: 12/24/2022] Open
Abstract
Colonoscopy is an effective tool for the early screening of colorectal diseases. However, applying colonoscopy to distinguish different intestinal diseases still faces great challenges of efficiency and accuracy. Here we constructed and evaluated a deep convolutional neural network (CNN) model based on 117 055 images from 16 004 individuals, which achieved a high accuracy of 0.933 on the validation dataset in identifying patients with polyps, colitis, or colorectal cancer (CRC) versus normal controls. The approach was further validated on multi-center real-time colonoscopy videos and images, detecting colorectal diseases with high accuracy and precision and generalizing across external validation datasets. The diagnostic performance of the model was also compared with that of skilled endoscopists and novices. In addition, the model shows promise in distinguishing adenomatous from hyperplastic polyps, with an area under the receiver operating characteristic curve of 0.975. The proposed CNN models thus have potential to assist clinicians in making efficient clinical decisions.
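The classifier described above maps colonoscopy frames to one of four categories. A minimal sketch of such a four-class image CNN is shown below; the architecture, layer sizes, and class order are illustrative assumptions, not the authors' model.

```python
# Sketch (illustrative, not the paper's network): a tiny 4-class CNN
# of the kind used to separate normal, polyp, colitis, and CRC frames.
import torch
import torch.nn as nn

classes = ["normal", "polyp", "colitis", "CRC"]  # assumed class order
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(classes)),
)

frames = torch.randn(2, 3, 224, 224)  # a batch of 2 RGB frames
logits = model(frames)                # one score per class per frame
probs = logits.softmax(dim=1)         # per-frame class probabilities
```

A production model would instead fine-tune a deep pretrained backbone and be trained on the annotated colonoscopy images, but the input/output contract is the same.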
Collapse
Affiliation(s)
- Jiawei Jiang
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
- Department of Computer Science, Eidgenossische Technische Hochschule Zurich, Zurich 999034, Switzerland
| | - Qianrong Xie
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Zhuo Cheng
- Digestive endoscopy center, Dazhou Central Hospital, Dazhou 635000, China
| | - Jianqiang Cai
- Department of Hepatobiliary Surgery, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| | - Tian Xia
- National Center of Biomedical Analysis, Beijing 100850, China
| | - Hang Yang
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Bo Yang
- Digestive endoscopy center, Dazhou Central Hospital, Dazhou 635000, China
| | - Hui Peng
- College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
| | - Xuesong Bai
- Digestive endoscopy center, Dazhou Central Hospital, Dazhou 635000, China
| | - Mingque Yan
- Digestive endoscopy center, Dazhou Central Hospital, Dazhou 635000, China
| | - Xue Li
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Jun Zhou
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Xuan Huang
- Department of Ophthalmology, Medical Research Center, Beijing Chao-Yang Hospital, Capital Medical University, Beijing 100020, China
| | - Liang Wang
- Information Department, Dazhou Central Hospital, Dazhou 635000, China
| | - Haiyan Long
- Digestive endoscopy center, Quxian People's Hospital, Dazhou 635000, China
| | - Pingxi Wang
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Yanpeng Chu
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Fan-Wei Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
| | - Xiuqin Zhang
- Institute of Molecular Medicine, Peking University, Beijing 100871, China
| | - Guangyu Wang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
| | - Fanxin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou 635000, China
- Department of Medicine, Sichuan University of Arts and Science, Dazhou 635000, China
| |
Collapse
|
198
|
Wang B, Yang J, Ai J, Luo N, An L, Feng H, Yang B, You Z. Accurate Tumor Segmentation via Octave Convolution Neural Network. Front Med (Lausanne) 2021; 8:653913. [PMID: 34095168 PMCID: PMC8169966 DOI: 10.3389/fmed.2021.653913] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Accepted: 03/24/2021] [Indexed: 11/13/2022] Open
Abstract
Three-dimensional (3D) liver tumor segmentation from Computed Tomography (CT) images is a prerequisite for computer-aided diagnosis, treatment planning, and monitoring of liver cancer. Despite many years of research, 3D liver tumor segmentation remains a challenging task. In this paper, we propose an effective and efficient method for tumor segmentation in liver CT images using encoder-decoder based octave convolution networks. Compared with networks that use standard convolutions for feature extraction, the proposed method uses octave convolutions to learn multiple-spatial-frequency features and can therefore better capture tumors of varying sizes and shapes. The network is fully convolutional, performing efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during training to combat potential optimization difficulties, giving the model a much faster convergence rate and more powerful discrimination capability. Finally, we integrate octave convolutions into the encoder-decoder architecture of UNet, which generates high-resolution tumor segmentations in a single forward pass without post-processing steps. Both architectures are trained on a subset of the LiTS (Liver Tumor Segmentation) Challenge dataset. The proposed approach is shown to significantly outperform other networks in terms of various accuracy measures and processing speed.
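The key building block here is the octave convolution, which splits feature maps into a high-frequency branch at full resolution and a low-frequency branch at half resolution, with convolutions exchanging information between the two. The sketch below is a minimal single-layer version (the channel split ratio and sizes are illustrative assumptions, and the paper's full UNet integration is omitted).

```python
# Sketch of one octave convolution layer: four 3x3 convolutions cover
# the high->high, high->low, low->high, and low->low paths, with
# pooling/upsampling to move between the two spatial resolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5):
        super().__init__()
        # alpha = fraction of channels assigned to the low-frequency branch
        self.in_lo, self.out_lo = int(alpha * in_ch), int(alpha * out_ch)
        self.in_hi, self.out_hi = in_ch - self.in_lo, out_ch - self.out_lo
        self.hh = nn.Conv2d(self.in_hi, self.out_hi, 3, padding=1)
        self.hl = nn.Conv2d(self.in_hi, self.out_lo, 3, padding=1)
        self.lh = nn.Conv2d(self.in_lo, self.out_hi, 3, padding=1)
        self.ll = nn.Conv2d(self.in_lo, self.out_lo, 3, padding=1)

    def forward(self, x_hi, x_lo):
        hh = self.hh(x_hi)                       # high -> high
        hl = self.hl(F.avg_pool2d(x_hi, 2))      # high -> low (downsample)
        ll = self.ll(x_lo)                       # low -> low
        lh = F.interpolate(self.lh(x_lo),        # low -> high (upsample)
                           scale_factor=2, mode="nearest")
        return hh + lh, ll + hl

x_hi = torch.randn(1, 8, 64, 64)  # high-frequency features, full resolution
x_lo = torch.randn(1, 8, 32, 32)  # low-frequency features, half resolution
y_hi, y_lo = OctConv(16, 16)(x_hi, x_lo)
# each branch keeps its own resolution after the layer
```

Because the low-frequency branch operates at half resolution, the layer reduces computation while still mixing coarse and fine spatial information, which is what helps with tumors of varying size.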
Collapse
Affiliation(s)
- Bo Wang
- The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Innovation Center for Future Chips, Tsinghua University, Beijing, China
- Beijing Jingzhen Medical Technology Ltd., Beijing, China
| | - Jingyi Yang
- School of Artificial Intelligence, Xidian University, Xi'an, China
| | - Jingyang Ai
- Beijing Jingzhen Medical Technology Ltd., Beijing, China
| | - Nana Luo
- Affiliated Hospital of Jining Medical University, Jining, China
| | - Lihua An
- Affiliated Hospital of Jining Medical University, Jining, China
| | - Haixia Feng
- Affiliated Hospital of Jining Medical University, Jining, China
| | - Bo Yang
- China Institute of Marine Technology & Economy, Beijing, China
| | - Zheng You
- The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
- Innovation Center for Future Chips, Tsinghua University, Beijing, China
| |
Collapse
|
199
|
Tao S, Jiang Y, Cao S, Wu C, Ma Z. Attention-Guided Network with Densely Connected Convolution for Skin Lesion Segmentation. SENSORS (BASEL, SWITZERLAND) 2021; 21:3462. [PMID: 34065771 PMCID: PMC8156456 DOI: 10.3390/s21103462] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 05/09/2021] [Accepted: 05/11/2021] [Indexed: 12/03/2022]
Abstract
The automatic segmentation of skin lesions is considered a key step in the diagnosis and treatment of skin lesions and is essential for improving patient survival rates. However, because of low contrast, lesion texture and boundaries are difficult to distinguish, which makes accurate segmentation challenging. To cope with these challenges, this paper proposes an attention-guided network with densely connected convolution for skin lesion segmentation, called CSAG and DCCNet. In the last step of the encoding path, the model uses densely connected convolution in place of an ordinary convolutional layer. A novel attention-oriented filter module called the Channel Spatial Fast Attention-guided Filter (CSFAG for short) was designed and embedded in the skip connections of CSAG and DCCNet. On the ISIC-2017 dataset, a large number of ablation experiments verified the superiority and robustness of the CSFAG module and the densely connected convolution. The segmentation performance of CSAG and DCCNet was compared with that of other recent algorithms, achieving very competitive results on all indicators. The robustness and cross-dataset performance of the method were tested on another publicly available dataset, PH2, further verifying the effectiveness of the model.
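The "densely connected convolution" the abstract refers to can be sketched as a block in which every layer receives the concatenation of all preceding feature maps. The version below is a generic dense block, not the paper's exact module (layer count, growth rate, and channel sizes are illustrative assumptions), and the CSFAG attention filter is omitted.

```python
# Sketch of a densely connected convolution block: each 3x3 conv sees
# the channel-wise concatenation of the input and all earlier outputs.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1)))
            ch += growth  # each layer adds `growth` channels to the stack
        self.out_channels = ch

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock(in_ch=4, growth=8, n_layers=3)
y = block(torch.randn(1, 4, 32, 32))
# output channels = 4 + 3 * 8 = 28
```

Dense connectivity encourages feature reuse and improves gradient flow, which is why such a block is a plausible replacement for a plain convolutional layer at the bottom of an encoder.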
Collapse
|
200
|
Developing a Recognition System for Diagnosing Melanoma Skin Lesions Using Artificial Intelligence Algorithms. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:9998379. [PMID: 34055044 PMCID: PMC8143893 DOI: 10.1155/2021/9998379] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Revised: 04/12/2021] [Accepted: 04/29/2021] [Indexed: 11/17/2022]
Abstract
In recent years, computerized biomedical imaging and analysis have become extremely promising and highly beneficial, providing remarkable information for the diagnosis of skin lesions. Modern diagnostic systems can help detect melanoma in its early stages and thereby save many lives, and there has been significant growth in the design of computer-aided diagnosis (CAD) systems using advanced artificial intelligence. The purpose of the present research is to develop a skin cancer diagnosis system that achieves a high detection rate. The proposed system was developed using both deep learning and traditional machine learning algorithms. Dermoscopy images were collected from the PH2 and ISIC 2018 datasets to evaluate the diagnostic system. The developed system is divided into a feature-based branch and a deep learning branch. The feature-based system relies on feature-extraction methods: the active contour method was applied to segment the lesion from dermoscopy images, and hybrid feature extraction, namely the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods, was used to extract texture features. The obtained features were then processed with an artificial neural network (ANN) algorithm. In the second system, a convolutional neural network (CNN) was applied for the efficient classification of skin diseases; the CNNs were pretrained using the large AlexNet and ResNet50 transfer-learning models. The experimental results show that the proposed method outperformed state-of-the-art methods on the PH2 and ISIC 2018 datasets. Standard evaluation metrics such as accuracy, specificity, sensitivity, precision, recall, and F-score were employed to evaluate the two proposed systems.
The ANN model achieved the highest accuracy on PH2 (97.50%) and ISIC 2018 (98.35%), outperforming the CNN model. An evaluation and comparison of the proposed systems for melanoma classification and detection are presented.
Collapse
|