51
Alam MS, Wang D, Liao Q, Sowmya A. A Multi-Scale Context Aware Attention Model for Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3731-3739. [PMID: 37015493 DOI: 10.1109/jbhi.2022.3227540]
Abstract
Medical image segmentation is critical for efficient diagnosis of diseases and treatment planning. In recent years, convolutional neural networks (CNN)-based methods, particularly U-Net and its variants, have achieved remarkable results on medical image segmentation tasks. However, they do not always work consistently on images with complex structures and large variations in regions of interest (ROI). This could be due to the fixed geometric structure of the receptive fields used for feature extraction and repetitive down-sampling operations that lead to information loss. To overcome these problems, the standard U-Net architecture is modified in this work by replacing the convolution block with a dilated convolution block to extract multi-scale context features with varying sizes of receptive fields, and adding a dilated inception block between the encoder and decoder paths to alleviate the problem of information recession and the semantic gap between features. Furthermore, the input of each dilated convolution block is added to the output through a squeeze and excitation unit, which alleviates the vanishing gradient problem and improves overall feature representation by re-weighting the channel-wise feature responses. The original inception block is modified by reducing the size of the spatial filter and introducing dilated convolution to obtain a larger receptive field. The proposed network was validated on three challenging medical image segmentation tasks with varying size ROIs: lung segmentation on chest X-ray (CXR) images, skin lesion segmentation on dermoscopy images and nucleus segmentation on microscopy cell images. Improved performance compared to state-of-the-art techniques demonstrates the effectiveness and generalisability of the proposed Dilated Convolution and Inception blocks-based U-Net (DCI-UNet).
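The dilated-convolution blocks described above widen the receptive field without extra down-sampling; the arithmetic behind that claim is easy to check. The following sketch is a generic illustration of receptive-field growth (not code from the paper): a k×k convolution with dilation d behaves like an effective kernel of size k + (k-1)(d-1), and stacked layers accumulate receptive field accordingly.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of (kernel, dilation, stride) layers."""
    rf, jump = 1, 1
    for k, d, s in layers:
        ke = effective_kernel(k, d)
        rf += (ke - 1) * jump   # each layer widens the field by (ke - 1) * current jump
        jump *= s
    return rf

# Three 3x3 convolutions with dilations 1, 2, 4 (stride 1) see a 15x15 region,
# versus 7x7 for three plain 3x3 convolutions.
print(receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)]))  # 15
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]))  # 7
```

This is why varying the dilation rate across parallel branches yields multi-scale context at constant spatial resolution.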
52
Abbas Q, Daadaa Y, Rashid U, Ibrahim MEA. Assist-Dermo: A Lightweight Separable Vision Transformer Model for Multiclass Skin Lesion Classification. Diagnostics (Basel) 2023; 13:2531. [PMID: 37568894 PMCID: PMC10417387 DOI: 10.3390/diagnostics13152531]
Abstract
A dermatologist-like automatic classification system is developed in this paper to recognize nine different classes of pigmented skin lesions (PSLs), using a separable vision transformer (SVT) technique to assist clinical experts in early skin cancer detection. In the past, researchers have developed a few systems to recognize nine classes of PSLs. However, they often require enormous computations to achieve high performance, which is burdensome to deploy on resource-constrained devices. In this paper, a new approach to designing SVT architecture is developed based on SqueezeNet and depthwise separable CNN models. The primary goal is to find a deep learning architecture with few parameters that has comparable accuracy to state-of-the-art (SOTA) architectures. This paper modifies the SqueezeNet design for improved runtime performance by utilizing depthwise separable convolutions rather than simple conventional units. To develop this Assist-Dermo system, a data augmentation technique is applied to control the PSL imbalance problem. Next, a pre-processing step is integrated to select the most dominant region and then enhance the lesion patterns in a perceptual-oriented color space. Afterwards, the Assist-Dermo system is designed to improve efficacy and performance with several layers and multiple filter sizes but fewer filters and parameters. For the training and evaluation of Assist-Dermo models, a set of PSL images is collected from different online data sources such as Ph2, ISBI-2017, HAM10000, and ISIC to recognize nine classes of PSLs. On the chosen dataset, it achieves an accuracy (ACC) of 95.6%, a sensitivity (SE) of 96.7%, a specificity (SP) of 95%, and an area under the curve (AUC) of 0.95. The experimental results show that the suggested Assist-Dermo technique outperformed SOTA algorithms when recognizing nine classes of PSLs. 
The Assist-Dermo system performed better than other competitive systems and can support dermatologists in the diagnosis of a wide variety of PSLs through dermoscopy. The Assist-Dermo model code is freely available on GitHub for the scientific community.
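The parameter savings that make depthwise separable convolutions attractive for lightweight models like the one above can be quantified with simple counting (generic arithmetic, not the authors' code): a standard k×k convolution costs k·k·C_in·C_out parameters, while the separable version costs k·k·C_in (depthwise) plus C_in·C_out (pointwise).

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a plain k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution to mix channels
    return depthwise + pointwise

k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)
sep = separable_conv_params(k, c_in, c_out)
print(std, sep, round(std / sep, 1))  # 294912 33920 8.7
```

An almost 9x reduction per layer at these channel widths, which is the kind of footprint shrinkage a SqueezeNet-style redesign relies on.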
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Yassine Daadaa
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Umer Rashid
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Mostafa E. A. Ibrahim
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Department of Electrical Engineering, Benha Faculty of Engineering, Benha University, Qalubia, Benha 13518, Egypt
53
Khan S, Ali H, Shah Z. Identifying the role of vision transformer for skin cancer-A scoping review. Front Artif Intell 2023; 6:1202990. [PMID: 37529760 PMCID: PMC10388102 DOI: 10.3389/frai.2023.1202990]
Abstract
INTRODUCTION Detecting and accurately diagnosing early melanocytic lesions is challenging due to extensive intra- and inter-observer variability. Dermoscopy images are widely used to identify and study skin cancer, but the blurred boundaries between lesions and surrounding tissue can lead to incorrect identification. Artificial Intelligence (AI) models, including vision transformers, have been proposed as a solution, but variations in symptoms and underlying effects hinder their performance. OBJECTIVE This scoping review synthesizes and analyzes the literature that uses vision transformers for skin lesion detection. METHODS The review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. Online repositories such as IEEE Xplore, Scopus, Google Scholar, and PubMed were searched to retrieve relevant articles. After screening and pre-processing, 28 studies that fulfilled the inclusion criteria were included. RESULTS AND DISCUSSION The review found that the use of vision transformers for skin cancer detection increased rapidly from 2020 to 2022 and showed outstanding performance for skin cancer detection using dermoscopy images. Along with highlighting intrinsic visual ambiguities, irregular skin lesion shapes, and other challenges, the review also discusses the key problems that obfuscate the trustworthiness of vision transformers in skin cancer diagnosis. This review provides new insights for practitioners and researchers to understand the current state of knowledge in this specialized research domain, and it outlines the best segmentation techniques for identifying accurate lesion boundaries and performing melanoma diagnosis. These findings will ultimately assist practitioners and researchers in making more reliable decisions promptly.
Affiliation(s)
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
54
Shao D, Ren L, Ma L. MSF-Net: A Lightweight Multi-Scale Feature Fusion Network for Skin Lesion Segmentation. Biomedicines 2023; 11:1733. [PMID: 37371828 DOI: 10.3390/biomedicines11061733]
Abstract
Segmentation of skin lesion images facilitates the early diagnosis of melanoma. However, this remains a challenging task due to the diversity of target scales, irregular segmentation shapes, low contrast, and blurred boundaries of dermoscopic images. This paper proposes a multi-scale feature fusion network (MSF-Net) based on the comprehensive attention convolutional neural network (CA-Net). We introduce a spatial attention mechanism into the convolution block through residual connections to focus on key regions. Meanwhile, Multi-scale Dilated Convolution (MDC) modules and Multi-scale Feature Fusion (MFF) modules are introduced to extract context information across scales and adaptively adjust the receptive field size of the feature map. We conducted extensive experiments on the public ISIC2018 dataset to verify the validity of MSF-Net. Ablation experiments demonstrated the effectiveness of the three proposed modules, and comparison with existing advanced networks confirms that MSF-Net achieves better segmentation with fewer parameters.
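One way to picture the adaptive multi-scale fusion described above is a softmax-weighted sum of feature maps computed at several dilation rates. The numpy sketch below is illustrative only: the weights, shapes, and scoring are invented stand-ins, not MSF-Net's actual MFF design.

```python
import numpy as np

def fuse_multiscale(features, scores):
    """Softmax-weighted sum of same-shaped feature maps, one per scale.

    features: list of (H, W) arrays; scores: one raw score per scale.
    """
    w = np.exp(scores - np.max(scores))
    w = w / w.sum()                       # softmax over scales
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused, w

# Stand-ins for responses at dilation rates 1, 2, 4
feats = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
fused, w = fuse_multiscale(feats, np.array([0.1, 0.1, 2.0]))
print(w.round(3))                  # heaviest weight lands on the last scale
print(round(float(fused[0, 0]), 2))  # about 2.65
```

Learning the scores per spatial location (rather than globally, as here) is what lets such a module adapt its effective receptive field to lesion size.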
Affiliation(s)
- Dangguo Shao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
- Lifan Ren
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Lei Ma
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
55
Mahmud S, Abbas TO, Mushtak A, Prithula J, Chowdhury MEH. Kidney Cancer Diagnosis and Surgery Selection by Machine Learning from CT Scans Combined with Clinical Metadata. Cancers (Basel) 2023; 15:3189. [PMID: 37370799 DOI: 10.3390/cancers15123189]
Abstract
Kidney cancer is one of the most common malignancies worldwide. Accurate diagnosis is a critical step in the management of kidney cancer patients and is influenced by multiple factors, including tumor size or volume, cancer type, and stage. For malignant tumors, partial or radical surgery of the kidney might be required, but for clinicians the basis for making this decision is often unclear. Partial nephrectomy could result in patient death due to cancer if kidney removal was necessary, whereas radical nephrectomy in less severe cases could resign patients to lifelong dialysis or the need for future transplantation without sufficient cause. Using machine learning to consider clinical data alongside computed tomography images could potentially help resolve some of these surgical ambiguities by enabling a more robust classification of kidney cancers and selection of optimal surgical approaches. In this study, we used the publicly available KiTS dataset of contrast-enhanced CT images and corresponding patient metadata to differentiate four major classes of kidney cancer: clear cell (ccRCC), chromophobe (chRCC), and papillary (pRCC) renal cell carcinoma, and oncocytoma (ONC). We processed these data to overcome the large field of view (FoV), extract tumor regions of interest (ROIs), classify patients using deep machine-learning models, and extract and post-process CT image features for combination with clinical data. Despite marked data imbalance, our combined approach achieved a high level of performance (85.66% accuracy, 84.18% precision, 85.66% recall, and 84.92% F1-score). When selecting surgical procedures for malignant tumors (RCC), our method proved even more reliable (90.63% accuracy, 90.83% precision, 90.61% recall, and 90.50% F1-score). Using feature ranking, we confirmed that tumor volume and cancer stage are the most relevant clinical features for predicting surgical procedures.
Once fully mature, the approach we propose could be used to assist surgeons in performing nephrectomies by guiding the choices of optimal procedures in individual patients with kidney cancer.
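The headline figures above (accuracy, precision, recall, F1) all derive from standard confusion-matrix arithmetic; as a refresher, a minimal binary-case sketch with invented counts (not the study's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics(tp=90, fp=10, fn=10, tn=90)
print(acc, prec, rec, round(f1, 3))  # 0.9 0.9 0.9 0.9
```

Multi-class results like those reported are obtained by computing these per class (one-vs-rest) and averaging, typically weighted by class support.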
Affiliation(s)
- Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Tariq O Abbas
- Urology Division, Surgery Department, Sidra Medicine, Doha 26999, Qatar
- Department of Surgery, Weill Cornell Medicine-Qatar, Doha 24811, Qatar
- College of Medicine, Qatar University, Doha 2713, Qatar
- Adam Mushtak
- Clinical Imaging Department, Hamad Medical Corporation, Doha 3050, Qatar
- Johayra Prithula
- Department of Electrical and Electronics Engineering, University of Dhaka, Dhaka 1000, Bangladesh
56
Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023; 88:102863. [PMID: 37343323 DOI: 10.1016/j.media.2023.102863]
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Alceu Bissoto
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
- Sandra Avila
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Eduardo Valle
- RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
- M Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
57
Qin C, Zheng B, Zeng J, Chen Z, Zhai Y, Genovese A, Piuri V, Scotti F. Dynamically aggregating MLPs and CNNs for skin lesion segmentation with geometry regularization. Comput Methods Programs Biomed 2023; 238:107601. [PMID: 37210926 DOI: 10.1016/j.cmpb.2023.107601]
Abstract
BACKGROUND AND OBJECTIVE Melanoma is a highly malignant skin tumor. Accurate segmentation of skin lesions from dermoscopy images is pivotal for computer-aided diagnosis of melanoma. However, blurred lesion boundaries, variable lesion shapes, and other interference factors pose a challenge in this regard. METHODS This work proposes a novel framework called CFF-Net (Cross Feature Fusion Network) for supervised skin lesion segmentation. The encoder of the network includes dual branches: a CNN branch that extracts rich local features and an MLP branch that establishes both global-spatial and global-channel dependencies for precise delineation of skin lesions. In addition, a feature-interaction module between the two branches strengthens the feature representation by allowing dynamic exchange of spatial and channel information, so as to retain more spatial details and suppress irrelevant noise. Moreover, an auxiliary prediction task is introduced to learn global geometric information, highlighting the boundary of the skin lesion. RESULTS Comprehensive experiments on four publicly available skin lesion datasets (ISIC 2018, ISIC 2017, ISIC 2016, and PH2) indicated that CFF-Net outperformed state-of-the-art models. In particular, compared with U-Net, CFF-Net increased the average Jaccard Index from 79.71% to 81.86% on ISIC 2018, from 78.03% to 80.21% on ISIC 2017, from 82.58% to 85.38% on ISIC 2016, and from 84.18% to 89.71% on PH2. Ablation studies demonstrated the effectiveness of each proposed component, and cross-validation experiments on the ISIC 2018 and PH2 datasets verified the generalizability of CFF-Net under different skin lesion data distributions. Finally, comparison experiments using three public datasets further demonstrated the superior performance of our model.
CONCLUSION The proposed CFF-Net performed well in four public skin lesion datasets, especially for challenging cases with blurred edges of skin lesions and low contrast between skin lesions and background. CFF-Net can be employed for other segmentation tasks with better prediction and more accurate delineation of boundaries.
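The Jaccard Index figures quoted above measure mask overlap; a minimal numpy sketch of Jaccard and its close relative, the Dice coefficient, on toy masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def jaccard(pred, target):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

pred   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(jaccard(pred, target))          # 2 / 4 = 0.5
print(round(dice(pred, target), 3))   # 4 / 6 = 0.667
```

The two are monotonically related (D = 2J / (1 + J)), so rankings of models usually agree between them even though the absolute scores differ.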
Affiliation(s)
- Chuanbo Qin
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Bin Zheng
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Junying Zeng
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Zhuyuan Chen
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Yikui Zhai
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
- Angelo Genovese
- Dipartimento di Informatica, Università degli Studi di Milano, 20133 Milano, Italy
- Vincenzo Piuri
- Dipartimento di Informatica, Università degli Studi di Milano, 20133 Milano, Italy
- Fabio Scotti
- Dipartimento di Informatica, Università degli Studi di Milano, 20133 Milano, Italy
58
Yang T, He Q, Huang L. OM-NAS: pigmented skin lesion image classification based on a neural architecture search. Biomed Opt Express 2023; 14:2153-2165. [PMID: 37206141 PMCID: PMC10191671 DOI: 10.1364/boe.483828]
Abstract
Because pigmented skin lesion image classification based on manually designed convolutional neural networks (CNNs) requires abundant experience in neural network design and considerable parameter tuning, we propose the macro operation mutation-based neural architecture search (OM-NAS) approach to automatically build a CNN for image classification of pigmented skin lesions. We first used an improved, cell-oriented search space containing both micro and macro operations; the macro operations include InceptionV1, Fire, and other well-designed neural network modules. During the search, an evolutionary algorithm based on macro operation mutation iteratively changed the operation type and connection mode of parent cells, so that macro operations were inserted into child cells much as a virus injects its DNA into a host. Ultimately, the best cells found by the search were stacked to build a CNN for the image classification of pigmented skin lesions, which was then assessed on the HAM10000 and ISIC2017 datasets. The test results showed that the CNN built with this approach was more accurate than, or almost as accurate as, state-of-the-art (SOTA) approaches such as AmoebaNet, InceptionV3 + Attention, and ARL-CNN. The average sensitivity of this method on the HAM10000 and ISIC2017 datasets was 72.4% and 58.5%, respectively.
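The macro-operation mutation described above amounts to randomly swapping one operation in a candidate cell for another drawn from the search space. This toy sketch is purely illustrative: the operation names, the list encoding of a cell, and the 0.3 macro-injection rate are all invented, not OM-NAS's actual representation.

```python
import random

MICRO = ["conv3x3", "conv5x5", "maxpool", "identity"]
MACRO = ["inception_v1", "fire"]   # well-designed modules injected wholesale

def mutate_cell(cell, rng):
    """Replace one randomly chosen operation; sometimes inject a macro op."""
    child = list(cell)
    i = rng.randrange(len(child))
    pool = MACRO if rng.random() < 0.3 else MICRO
    child[i] = rng.choice(pool)
    return child

rng = random.Random(0)
parent = ["conv3x3", "identity", "maxpool"]
child = mutate_cell(parent, rng)
print(child)  # differs from the parent in at most one slot
```

In a full evolutionary loop, many such children would be trained briefly, scored on validation accuracy, and the fittest kept as the next generation's parents.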
Affiliation(s)
- Tiejun Yang
- College of Intelligent Medicine and Biotechnology, Guilin Medical University, Guilin, 541199 Guangxi, China
- Qing He
- Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, 541006 Guangxi, China
- Lin Huang
- Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, 541006 Guangxi, China
59
Ain QU, Al-Sahaf H, Xue B, Zhang M. Automatically Diagnosing Skin Cancers From Multimodality Images Using Two-Stage Genetic Programming. IEEE Trans Cybern 2023; 53:2727-2740. [PMID: 35797327 DOI: 10.1109/tcyb.2022.3182474]
Abstract
Developing a computer-aided diagnostic system for detecting various skin malignancies from images has attracted many researchers. Unlike many machine-learning approaches, such as artificial neural networks, genetic programming (GP) automatically evolves models with flexible representation. GP successfully provides effective solutions using its intrinsic ability to select prominent features (i.e., feature selection) and build new features (i.e., feature construction). Existing approaches have utilized GP to construct new features from the complete set of original features and the set of operators. However, the complete set of features may contain redundant or irrelevant features that do not provide useful information for classification. This study develops a two-stage GP method in which the first stage selects prominent features and the second stage constructs new features from these selected features and operators, such as multiplication, in a wrapper approach to improve classification performance. To capture local, global, texture, color, and multi-scale image properties of skin images, GP selects and constructs features extracted from local binary patterns and pyramid-structured wavelet decomposition. The accuracy of this GP method is assessed using two real-world skin image datasets captured with standard cameras and specialized instruments, and compared with commonly used classification algorithms, three state-of-the-art methods, and an existing embedded GP method. The results reveal that this new approach to feature selection and feature construction effectively improves the performance of machine-learning classification algorithms. Unlike other black-box models, the models evolved by GP are interpretable; therefore, the proposed method can assist dermatologists in identifying prominent features, as shown by further analysis of the evolved models.
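The two-stage idea above (select prominent features first, then construct new ones only from those) can be sketched in a few lines. The scoring rule and the pairwise-product operator below are simplified stand-ins for the evolved GP trees, chosen purely for illustration:

```python
import numpy as np

def select_features(X, y, k):
    """Stage 1: keep the k features whose class means differ most."""
    scores = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(scores)[::-1][:k]

def construct_features(X, idx):
    """Stage 2: build pairwise products of the selected features only."""
    cols = [X[:, i] * X[:, j] for a, i in enumerate(idx) for j in idx[a + 1:]]
    return np.column_stack(cols) if cols else np.empty((len(X), 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))
y = np.array([0] * 10 + [1] * 10)
idx = select_features(X, y, k=3)
new = construct_features(X, idx)
print(new.shape)  # (20, 3): one column per pair of selected features
```

Restricting construction to the selected subset is the key point: it keeps the constructed-feature space small and free of the redundant inputs the abstract warns about.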
60
Dahou A, Aseeri AO, Mabrouk A, Ibrahim RA, Al-Betar MA, Elaziz MA. Optimal Skin Cancer Detection Model Using Transfer Learning and Dynamic-Opposite Hunger Games Search. Diagnostics (Basel) 2023; 13:1579. [PMID: 37174970 PMCID: PMC10178333 DOI: 10.3390/diagnostics13091579]
Abstract
Recently, pre-trained deep learning (DL) models have been employed to tackle and enhance performance on many tasks, such as skin cancer detection, instead of training models from scratch. However, existing systems are unable to attain substantial levels of accuracy. Therefore, we propose in this paper a robust skin cancer detection framework that improves accuracy by extracting and learning relevant image representations using a MobileNetV3 architecture. The extracted features are then used as input to a modified Hunger Games Search (HGS) based on Particle Swarm Optimization (PSO) and Dynamic-Opposite Learning (DOLHGS). This modification serves as a novel feature-selection step that retains the most relevant features to maximize the model's performance. To evaluate the efficiency of the developed DOLHGS, the ISIC-2016 and PH2 datasets were employed, comprising two and three categories, respectively. The proposed model achieved an accuracy of 88.19% on the ISIC-2016 dataset and 96.43% on PH2. Based on the experimental results, the proposed approach delivered more accurate and efficient skin cancer detection than other well-known and popular algorithms in terms of classification accuracy and optimized features.
Affiliation(s)
- Abdelghani Dahou
- Mathematics and Computer Science Department, University of Ahmed DRAIA, Adrar 01000, Algeria
- Ahmad O Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Alhassan Mabrouk
- Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni-Suef 65214, Egypt
- Rehab Ali Ibrahim
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Mohammed Azmi Al-Betar
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Faculty of Computer Science & Engineering, Galala University, Suez 43511, Egypt
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 10999, Lebanon
61
Sun Y, Lou W, Ma W, Zhao F, Su Z. Convolution Neural Network with Coordinate Attention for Real-Time Wound Segmentation and Automatic Wound Assessment. Healthcare (Basel) 2023; 11:1205. [PMID: 37174747 PMCID: PMC10178407 DOI: 10.3390/healthcare11091205]
Abstract
BACKGROUND Wound treatment in emergency care requires the rapid assessment of wound size by medical staff. Limited medical resources and the empirical assessment of wounds can delay the treatment of patients, and manual contact measurement methods are often inaccurate and susceptible to wound infection. This study aimed to develop an Automatic Wound Segmentation Assessment (AWSA) framework for real-time wound segmentation and automatic wound region estimation. METHODS The method used a short-term dense concatenate classification network (STDC-Net) as the backbone, realizing a trade-off between segmentation accuracy and prediction speed. A coordinate attention mechanism was introduced to further improve the network's segmentation performance. A functional relationship model between prior graphic pixels and shooting heights was constructed to achieve wound area measurement. Finally, extensive experiments on two types of wound datasets were conducted. RESULTS The experimental results showed that real-time AWSA outperformed state-of-the-art methods on metrics such as mAP, mIoU, recall, and Dice score. The AUC value, which reflects comprehensive segmentation ability, also reached the highest level, about 99.5%. The FPS values of our proposed segmentation method on the two datasets were 100.08 and 102.11, respectively, about 42% higher than those of the second-ranked method, reflecting better real-time performance. Moreover, real-time AWSA could automatically estimate the wound area in square centimeters with a relative error of only about 3.1%. CONCLUSION The real-time AWSA method used the STDC-Net classification network as its backbone and improved processing speed while accurately segmenting the wound, realizing a segmentation accuracy-prediction speed trade-off.
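The pixel-to-area model described above maps shooting height to physical pixel size. Under a pinhole-camera assumption, the ground footprint of one pixel grows linearly with camera distance, so the area in cm² follows from the segmented pixel count. This sketch is generic and the calibration constant is invented, not the paper's fitted model:

```python
def pixel_size_cm(height_cm, k=0.0005):
    """Side length of one pixel at the given camera height.

    For a pinhole camera, a pixel's ground footprint grows linearly with
    distance; k is a per-device calibration constant (made up here).
    """
    return k * height_cm

def wound_area_cm2(n_pixels, height_cm, k=0.0005):
    """Physical area covered by n_pixels segmented wound pixels."""
    return n_pixels * pixel_size_cm(height_cm, k) ** 2

# 40,000 segmented pixels shot from 30 cm -> each pixel is 0.015 cm wide
print(round(wound_area_cm2(40_000, 30), 6))  # 9.0 (cm^2)
```

In practice, k would be calibrated once per device by photographing an object of known size at a known height, which is essentially what the prior-graphics model in the paper does.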
Affiliation(s)
- Yi Sun
- National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
- Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Wenzhong Lou
- National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
- Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Wenlong Ma
- National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
- Fei Zhao
- National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
- Zilong Su
- National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
62
Dandu R, Vinayaka Murthy M, Ravi Kumar Y. Transfer learning for segmentation with hybrid classification to Detect Melanoma Skin Cancer. Heliyon 2023; 9:e15416. [PMID: 37151638 PMCID: PMC10161578 DOI: 10.1016/j.heliyon.2023.e15416]
Abstract
Melanoma is an abnormal proliferation of skin cells that, in most cases, arises on skin surfaces exposed to copious amounts of sunlight, although this common type of cancer may also develop in areas of the skin that receive little sun. This research addresses the segmentation and classification of melanoma skin cancer, the fifth most common skin cancer lesion. Biomedical imaging and analysis has become an increasingly promising and beneficial means of addressing melanoma in recent years. The study evaluates an Attribute Selected Classifier combined with a Color Layout Filter model. The proposed method produced optimal results on several performance metrics, including accuracy, precision, recall, and the area under the precision-recall curve (PRC): it yielded 90.96% accuracy, 91% precision, a recall of 0.91 out of 1.0, a ROC AUC of 0.95, a Kappa statistic of 0.87, and an F-measure of 0.91, with the lowest error observed on the dataset used. Finally, this research recommends the Attribute Selected Classifier implemented with an image enhancement technique such as the Color Layout Filter as an efficient approach.
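The Kappa statistic reported above corrects observed agreement for the agreement expected by chance. As a refresher, a minimal sketch from a 2x2 confusion matrix with invented counts (not the study's data):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    # chance agreement from the marginal totals of each class
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no  = ((fn + tn) / n) * ((fp + tn) / n)
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa(tp=45, fp=5, fn=5, tn=45), 3))  # 0.8
```

A kappa of 0.87, as quoted above, therefore indicates agreement well beyond what chance alone would produce on balanced classes.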
63
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107408. [PMID: 36805279 DOI: 10.1016/j.cmpb.2023.107408] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2022] [Revised: 01/31/2023] [Accepted: 02/04/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have long been used for medical imaging but did not achieve their full potential in the past because of insufficient computing power and scarcity of training data. In recent years, we have seen substantial growth in DL networks because of improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize to data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of segmentation targets and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system is based on two significant technical inventions. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network, for skin lesion segmentation using broad receptive fields and spatial edge attention feature fusion. We examine the trained model's generalization capabilities on skin lesion segmentation to quantify these two inventions. We cross-examine the model using two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained 94.63% DSC and 99.12% accuracy. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained 93.63% DSC and 97.01% accuracy. CONCLUSION Numerous experiments reveal that our system produces excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
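The DSC figures above are the Dice similarity coefficient between predicted and ground-truth masks. A minimal sketch of that metric for binary masks, purely illustrative and not the authors' code:

```python
import numpy as np

# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
# eps guards against division by zero on empty masks.

def dice_coefficient(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(a, b)  # 2*2 / (3+3) ≈ 0.667
```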
Affiliation(s)
- Meghana Karri
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
- Chandra Sekhara Rao Annavarapu
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan.
64
Wang J, Fang Z, Yao S, Yang F. Ellipse guided multi-task network for fetal head circumference measurement. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
65
Wang L, Zhang L, Shu X, Yi Z. Intra-class consistency and inter-class discrimination feature learning for automatic skin lesion classification. Med Image Anal 2023; 85:102746. [PMID: 36638748 DOI: 10.1016/j.media.2023.102746] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 10/24/2022] [Accepted: 01/05/2023] [Indexed: 01/09/2023]
Abstract
Automated skin lesion classification has been shown to improve the diagnostic performance for dermoscopic images. Although many successes have been achieved, accurate classification remains challenging due to the significant intra-class variation and inter-class similarity. In this article, a deep learning method is proposed to increase the intra-class consistency as well as the inter-class discrimination of learned features in automatic skin lesion classification. To enhance inter-class discriminative feature learning, a CAM-based (class activation mapping) global-lesion localization module is proposed by optimizing the distance of CAMs for the same dermoscopic image generated by different skin lesion tasks. Then, a global-features-guided intra-class similarity learning module is proposed to generate the class center according to the deep features of all samples in one class and the history feature of one sample during the learning process. In this way, the performance can be improved with the collaboration of CAM-based inter-class feature discrimination and global-features-guided intra-class feature concentration. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on the ISIC-2017 and ISIC-2018 datasets. Experimental results with different backbones have demonstrated that the proposed method has good generalizability and can adaptively focus on more discriminative regions of the skin lesion.
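The intra-class concentration idea above resembles center loss: each deep feature is pulled toward a running center of its class. A simplified sketch under that interpretation; the update rule, names, and toy features are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Center-loss-style sketch: maintain per-class feature centers with an
# exponential moving average, and penalize each feature's squared
# distance to its class center (intra-class consistency).

def update_centers(centers, feats, labels, alpha=0.5):
    """EMA update of per-class centers from the current batch."""
    centers = centers.copy()
    for c in np.unique(labels):
        class_mean = feats[labels == c].mean(axis=0)
        centers[c] = (1 - alpha) * centers[c] + alpha * class_mean
    return centers

def intra_class_loss(centers, feats, labels):
    """Mean squared distance of each feature to its class center."""
    diffs = feats - centers[labels]
    return float((diffs ** 2).sum(axis=1).mean())

centers = np.zeros((2, 3))
feats = np.array([[1.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0]])
labels = np.array([0, 0, 1])
centers = update_centers(centers, feats, labels, alpha=1.0)
loss = intra_class_loss(centers, feats, labels)
```

With `alpha=1.0` the centers jump straight to the batch means, so the toy features sit exactly on their centers and the loss is zero.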
Affiliation(s)
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China.
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
66
Ali Z, Naz S, Zaffar H, Choi J, Kim Y. An IoMT-Based Melanoma Lesion Segmentation Using Conditional Generative Adversarial Networks. SENSORS (BASEL, SWITZERLAND) 2023; 23:3548. [PMID: 37050607 PMCID: PMC10098854 DOI: 10.3390/s23073548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 02/03/2023] [Accepted: 03/25/2023] [Indexed: 06/19/2023]
Abstract
Currently, Internet of Medical Things (IoMT)-based technologies provide a foundation for remote data collection and medical assistance for various diseases. Along with developments in computer vision, the application of artificial intelligence and deep learning in IoMT devices aids in the design of effective CAD systems for various diseases, such as melanoma cancer, even in the absence of experts. However, accurate segmentation of melanoma skin lesions from images by CAD systems is necessary to carry out an effective diagnosis. Nevertheless, the visual similarity between normal and melanoma lesions is very high, which limits the accuracy of various traditional, parametric, and deep learning-based methods. Hence, as a solution to the challenge of accurate segmentation, we propose an advanced generative deep learning model called the Conditional Generative Adversarial Network (cGAN) for lesion segmentation. In the suggested technique, the generation of segmented images is conditioned on dermoscopic images of skin lesions to produce accurate segmentation. We assessed the proposed model using three distinct datasets, DermQuest, DermIS, and ISIC2016, and attained optimal segmentation results of 99%, 97%, and 95% performance accuracy, respectively.
Affiliation(s)
- Zeeshan Ali
- R & D Setups, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
- Sheneela Naz
- Department of Computer Science, COMSATS University Islamabad, Islamabad 45550, Pakistan
- Hira Zaffar
- Department of Computer Science, Air University, Aerospace and Aviation Kamra Campus, Islamabad 44000, Pakistan
- Jaeun Choi
- College of Business, Kwangwoon University, Seoul 01897, Republic of Korea
- Yongsung Kim
- Department of Technology Education, Chungnam National University, Daejeon 34134, Republic of Korea
67
Li M, Lu Y, Cao S, Wang X, Xie S. A Hyperspectral Image Classification Method Based on the Nonlocal Attention Mechanism of a Multiscale Convolutional Neural Network. SENSORS (BASEL, SWITZERLAND) 2023; 23:3190. [PMID: 36991898 PMCID: PMC10052326 DOI: 10.3390/s23063190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/14/2023] [Accepted: 03/14/2023] [Indexed: 06/19/2023]
Abstract
Recently, convolutional neural networks have been widely used in hyperspectral image classification and have achieved excellent performance. However, the fixed receptive field of the convolution kernel often leads to incomplete feature extraction, and the high redundancy of spectral information leads to difficulties in spectral feature extraction. To solve these problems, we propose a nonlocal attention mechanism of a 2D-3D hybrid CNN (2-3D-NL CNN), which includes an inception block and a nonlocal attention module. The inception block uses convolution kernels of different sizes to equip the network with multiscale receptive fields to extract the multiscale spatial features of ground objects. The nonlocal attention module enables the network to obtain a more comprehensive receptive field in the spatial and spectral dimensions while suppressing the information redundancy of the spectral dimension, making the extraction of spectral features easier. Experiments on two hyperspectral datasets, Pavia University and Salinas, validate the effectiveness of the inception block and the nonlocal attention module. The results show that our model achieves an overall classification accuracy of 99.81% and 99.42% on the two datasets, respectively, which is higher than the accuracy of existing models.
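The non-local module above lets every spatial position attend to every other position. A bare-bones sketch of that computation on a flattened feature map, with the usual embedding projections omitted for brevity; illustrative only:

```python
import numpy as np

# Non-local (self-attention) block: pairwise affinities over all
# positions, softmax-normalized, then a residual aggregation.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_block(x):
    """x: (N, C) features for N spatial positions; returns x + attn(x)."""
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (N, N) affinities
    return x + attn @ x                            # residual aggregation

x = np.random.default_rng(0).normal(size=(6, 4))
y = nonlocal_block(x)
```

Because the affinity matrix is N x N, every output position mixes information from the whole map, which is the "comprehensive receptive field" the abstract refers to.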
Affiliation(s)
- Mingtian Li
- Institute of Remote Sensing and Earth Sciences, School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Yu Lu
- SenseTime Research, Shenzhen 518000, China
- Shixian Cao
- Institute of Remote Sensing and Earth Sciences, School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Xinyu Wang
- Institute of Remote Sensing and Earth Sciences, School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Shanjuan Xie
- Institute of Remote Sensing and Earth Sciences, School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Zhejiang Provincial Key Laboratory of Urban Wetlands and Regional Change, Hangzhou Normal University, Hangzhou 311121, China
68
Yang S, Wang L. HMT-Net: Transformer and MLP Hybrid Encoder for Skin Disease Segmentation. SENSORS (BASEL, SWITZERLAND) 2023; 23:3067. [PMID: 36991777 PMCID: PMC10051843 DOI: 10.3390/s23063067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/02/2023] [Accepted: 03/08/2023] [Indexed: 06/19/2023]
Abstract
At present, convolutional neural networks (CNNs) have been widely applied to the task of skin disease image segmentation due to their powerful information discrimination abilities and have achieved good results. However, it is difficult for CNNs to capture the connection between long-range contexts when extracting deep semantic features of lesion images, and the resulting semantic gap leads to the problem of segmentation blur in skin lesion image segmentation. To solve these problems, we designed a hybrid encoder network based on Transformer and multilayer perceptron (MLP) architectures, which we call HMT-Net. In the HMT-Net network, we use the attention mechanism of the CTrans module to learn the global relevance of the feature map and improve the network's ability to understand the overall foreground information of the lesion. On the other hand, we use the TokMLP module to effectively enhance the network's ability to learn the boundary features of lesion images. In the TokMLP module, the tokenized MLP axial displacement operation strengthens the connection between pixels to facilitate the extraction of local feature information by our network. To verify the superiority of our network in segmentation tasks, we conducted extensive experiments on the proposed HMT-Net network and several recently proposed Transformer and MLP networks on three public datasets (ISIC2018, ISBI2017, and ISBI2016) and obtained the following results. Our method achieves 82.39%, 75.53%, and 83.98% on the Dice index and 89.35%, 84.93%, and 91.33% on the IoU. Compared with the latest skin disease segmentation network, FAC-Net, our method improves the Dice index by 1.99%, 1.68%, and 1.6%, respectively. In addition, the IoU indicators have increased by 0.45%, 2.36%, and 1.13%, respectively. The experimental results show that our designed HMT-Net achieves state-of-the-art performance superior to other segmentation methods.
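The "axial displacement" in the tokenized MLP rolls channel groups of the token grid by different offsets along one spatial axis, so a following per-token MLP ends up mixing neighboring pixels. A sketch of that shift alone; the offsets and grouping are illustrative, not the paper's exact configuration:

```python
import numpy as np

# Axial shift: split channels into groups and roll each group along a
# chosen spatial axis by its own offset.

def axial_shift(x, axis, shifts=(-1, 0, 1)):
    """x: (H, W, C). Roll len(shifts) channel groups along `axis`."""
    groups = np.array_split(x, len(shifts), axis=2)
    rolled = [np.roll(g, s, axis=axis) for g, s in zip(groups, shifts)]
    return np.concatenate(rolled, axis=2)

x = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
y = axial_shift(x, axis=1)  # shift along the width axis
```

After the shift, position (h, w) holds channels that originally came from (h, w-1), (h, w) and (h, w+1), so even a pointwise MLP sees a local neighborhood.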
Affiliation(s)
- Liejun Wang
- College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
69
Qiu S, Li C, Feng Y, Zuo S, Liang H, Xu A. GFANet: Gated Fusion Attention Network for skin lesion segmentation. Comput Biol Med 2023; 155:106462. [PMID: 36857942 DOI: 10.1016/j.compbiomed.2022.106462] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 12/13/2022] [Accepted: 12/19/2022] [Indexed: 02/21/2023]
Abstract
Automatic segmentation of skin lesions is crucial for diagnosing and treating skin diseases. Although current medical image segmentation methods have significantly improved the results of skin lesion segmentation, the following major challenges still affect segmentation performance: (i) segmentation targets have irregular shapes and diverse sizes, and (ii) there is low contrast or blurred boundaries between lesions and background. To address these issues, this study proposes a Gated Fusion Attention Network (GFANet), which designs two progressive relation decoders to accurately segment skin lesion images. First, we use a Context Features Gated Fusion Decoder (CGFD) to fuse multiple levels of contextual features, and a prediction result is generated as the initial guide map. This is then optimized by a prediction decoder consisting of a shape flow and a final Gated Convolution Fusion (GCF) module, where we iteratively use a set of Channel Reverse Attention (CRA) modules and GCF modules in the shape flow to combine the features of the current layer with the prediction results of the adjacent next layer to gradually extract boundary information. Finally, to speed up network convergence and improve segmentation accuracy, we use GCF to fuse low-level features from the encoder with the final output of the shape flow. To verify the effectiveness and advantages of the proposed GFANet, we conduct extensive experiments on four publicly available skin lesion datasets (International Skin Imaging Collaboration [ISIC] 2016, ISIC 2017, ISIC 2018, and PH2) and compare with state-of-the-art methods. The experimental results show that the proposed GFANet achieves excellent segmentation performance on commonly used evaluation metrics, and the segmentation results are stable. The source code is available at https://github.com/ShiHanQ/GFANet.
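The gated fusion idea above can be reduced to its core: a sigmoid gate decides, per position, how much of each of two feature maps to keep. A minimal sketch, with the gate given as a fixed array purely for illustration (in the network it would be produced by a learned convolution):

```python
import numpy as np

# Gated fusion of two feature maps: out = g * A + (1 - g) * B,
# where g = sigmoid(gate_logits) lies in (0, 1) per position.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(feat_a, feat_b, gate_logits):
    g = sigmoid(gate_logits)
    return g * feat_a + (1.0 - g) * feat_b

a = np.full((2, 2), 10.0)
b = np.zeros((2, 2))
out = gated_fusion(a, b, gate_logits=np.zeros((2, 2)))  # g = 0.5 everywhere
```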
Affiliation(s)
- Shihan Qiu
- Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Chengfei Li
- Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China.
- Yue Feng
- Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Song Zuo
- Department of Hemangioma and Vascular Malformation, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China.
- Huijie Liang
- Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Ao Xu
- Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
70
Jiang Y, Dong J, Zhang Y, Cheng T, Lin X, Liang J. PCF-Net: Position and context information fusion attention convolutional neural network for skin lesion segmentation. Heliyon 2023; 9:e13942. [PMID: 36923881 PMCID: PMC10009446 DOI: 10.1016/j.heliyon.2023.e13942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 02/10/2023] [Accepted: 02/15/2023] [Indexed: 02/27/2023] Open
Abstract
Skin lesion segmentation is a crucial step in the process of skin cancer diagnosis and treatment. The variation in position, shape, size and edges of skin lesion areas poses a challenge for accurate segmentation of skin lesion areas in dermoscopic images. To meet these challenges, in this paper, using UNet as the baseline model, a convolutional neural network based on position and context information fusion attention is proposed, called PCF-Net. A novel two-branch attention mechanism is designed to aggregate position and context information, called the Position and Context Information Aggregation Attention Module (PCFAM). A global context information complementary module (GCCM) is developed to obtain long-range dependencies. A multi-scale grouped dilated convolution feature extraction module (MSEM) is proposed to capture multi-scale feature information and is placed in the bottleneck of UNet. On the ISIC2018 dataset, extensive ablation experiments demonstrated the superiority of PCF-Net for dermoscopic image segmentation after adding PCFAM, GCCM and MSEM. Compared with other state-of-the-art methods, PCF-Net achieves competitive results on all metrics.
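Dilated convolution, the building block of the multi-scale module above, enlarges the receptive field without adding parameters: a kernel of size k with dilation d covers d*(k-1)+1 samples. A 1D sketch of the operation, illustrative only:

```python
import numpy as np

# 'Valid' 1D dilated convolution: taps are spaced `dilation` apart,
# so the receptive field is dilation*(k-1)+1 samples per output.

def dilated_conv1d(x, kernel, dilation=1):
    k = len(kernel)
    span = dilation * (k - 1) + 1
    out = []
    for i in range(len(x) - span + 1):
        taps = x[i : i + span : dilation]
        out.append(float(np.dot(taps, kernel)))
    return np.array(out)

x = np.arange(8, dtype=float)
y1 = dilated_conv1d(x, np.ones(3), dilation=1)  # receptive field 3
y2 = dilated_conv1d(x, np.ones(3), dilation=2)  # receptive field 5
```

Stacking branches with different dilation rates, as the grouped module does, samples the same feature map at several effective scales in parallel.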
Affiliation(s)
- Yun Jiang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Jinkun Dong
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Yuan Zhang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Tongtong Cheng
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xin Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Jing Liang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
71
Adepu AK, Sahayam S, Jayaraman U, Arramraju R. Melanoma classification from dermatoscopy images using knowledge distillation for highly imbalanced data. Comput Biol Med 2023; 154:106571. [PMID: 36709518 DOI: 10.1016/j.compbiomed.2023.106571] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Revised: 12/17/2022] [Accepted: 01/22/2023] [Indexed: 01/26/2023]
Abstract
Melanoma is a deadly malignant skin cancer that generally grows and spreads rapidly. Early detection of melanoma can improve the prognosis of a patient. However, large-scale screening for melanoma is arduous due to human error and the unavailability of trained experts. Accurate automatic melanoma classification from dermoscopy images can help mitigate such issues. However, the classification task is challenging due to class-imbalance, high inter-class, and low intra-class similarity problems. It results in poor sensitivity scores when it comes to the disease classification task. The work proposes a novel knowledge-distilled lightweight Deep-CNN-based framework for melanoma classification to tackle the high inter-class and low intra-class similarity problems. To handle the high class-imbalance problem, the work proposes using Cost-Sensitive Learning with Focal Loss, to achieve better sensitivity scores. As a pre-processing step, an in-painting algorithm is used to remove artifacts from dermoscopy images. New CutOut variants, namely, Sprinkled and microscopic Cutout augmentations, have been employed as regularizers to avoid over-fitting. The robustness of the model has been studied through stratified K-fold cross-validation. Ablation studies with test time augmentation (TTA) and the addition of various noises like salt & pepper, pepper-only, and Gaussian noises have been studied. All the models trained in the work have been evaluated on the SIIM-ISIC Melanoma Classification Challenge - ISIC-2020 dataset. With our EfficientNet-B5 (FL) teacher model, the EfficientNet-B2 student model achieved an Area under the Curve (AUC) of 0.9295, and a sensitivity of 0.8087 on the ISIC-2020 test data. The sensitivity value of 0.8087 for melanoma classification is the current state-of-the-art result in the literature for the ISIC-2020 dataset which is a significant 49.48% increase from the best non-distilled standalone model, EfficientNet B5 (FL) teacher with 0.5410.
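Two training ingredients named above have compact standard formulas: focal loss, which down-weights easy examples by (1-p_t)^gamma to fight class imbalance, and a soft-target distillation term pulling student probabilities toward the teacher's. A sketch of both using the standard definitions; all names and numbers are illustrative, not the authors' code:

```python
import numpy as np

# Binary focal loss and a soft-target distillation cross-entropy.

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p = predicted probability of class 1, y in {0, 1}."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)
    a_t = np.where(y == 1, alpha, 1 - alpha)
    return float((-a_t * (1 - p_t) ** gamma * np.log(p_t)).mean())

def distill_loss(student_p, teacher_p, eps=1e-7):
    """Cross-entropy of student probabilities against teacher targets."""
    student_p = np.clip(student_p, eps, 1 - eps)
    return float(-(teacher_p * np.log(student_p)).sum(axis=-1).mean())

easy = focal_loss(np.array([0.95]), np.array([1]))  # confident, correct
hard = focal_loss(np.array([0.55]), np.array([1]))  # barely correct
```

The (1-p_t)^gamma factor makes the confident example contribute far less loss than the hard one, which is exactly the rebalancing effect the abstract relies on.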
Affiliation(s)
- Anil Kumar Adepu
- Department of Computer Science and Engineering, Indian Institute of Information Technology Design and Manufacturing Kancheepuram, Chennai 600127, Tamil Nadu, India.
- Subin Sahayam
- Department of Computer Science and Engineering, Indian Institute of Information Technology Design and Manufacturing Kancheepuram, Chennai 600127, Tamil Nadu, India.
- Umarani Jayaraman
- Department of Computer Science and Engineering, Indian Institute of Information Technology Design and Manufacturing Kancheepuram, Chennai 600127, Tamil Nadu, India.
- Rashmika Arramraju
- Apollo Institute of Medical Sciences and Research, Hyderabad 500096, Telangana, India.
72
Phan DT, Ta QB, Ly CD, Nguyen CH, Park S, Choi J, Se HO, Oh J. Smart Low Level Laser Therapy System for Automatic Facial Dermatological Disorder Diagnosis. IEEE J Biomed Health Inform 2023; 27:1546-1557. [PMID: 37021858 DOI: 10.1109/jbhi.2023.3237875] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Computer-aided diagnosis using dermoscopy images is a promising technique for improving the efficiency of facial skin disorder diagnosis and treatment. Hence, in this study, we propose a low-level laser therapy (LLLT) system with deep neural network and medical Internet of things (MIoT) assistance. The main contributions of this study are to (1) provide a comprehensive hardware and software design for an automatic phototherapy system, (2) propose a modified U2-Net deep learning model for facial dermatological disorder segmentation, and (3) develop a synthetic data generation process for the proposed models to address the issue of a limited and imbalanced dataset. Finally, a MIoT-assisted LLLT platform for remote healthcare monitoring and management is proposed. The trained modified U2-Net model achieved better performance on an unseen dataset than other recent models, with an average accuracy of 97.5%, Jaccard index of 74.7%, and Dice coefficient of 80.6%. The experimental results demonstrated that our proposed LLLT system can accurately segment facial skin diseases and automatically apply phototherapy. The integration of artificial intelligence and MIoT-based healthcare platforms is a significant step toward the development of medical assistant tools in the near future.
73
Liu Z, Xiong R, Jiang T. CI-Net: Clinical-Inspired Network for Automated Skin Lesion Recognition. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:619-632. [PMID: 36279355 DOI: 10.1109/tmi.2022.3215547] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The lesion recognition of dermoscopy images is significant for automated skin cancer diagnosis. Most existing methods ignore the medical perspective, which is crucial since this task requires a large amount of medical knowledge. A few methods are designed according to medical knowledge, but they fail to fully follow doctors' entire learning and diagnosis process, in which certain strategies and steps are carried out in practice. Thus, we put forward the Clinical-Inspired Network (CI-Net), which incorporates the learning strategy and diagnosis process of doctors for better analysis. The diagnostic process contains three main steps: the zoom step, the observe step and the compare step. To simulate these, we introduce three corresponding modules: a lesion area attention module, a feature extraction module and a lesion feature attention module. To simulate the distinguish strategy, which is commonly used by doctors, we introduce a distinguish module. We evaluate our proposed CI-Net on six challenging datasets, including the ISIC 2016, ISIC 2017, ISIC 2018, ISIC 2019, ISIC 2020 and PH2 datasets, and the results indicate that CI-Net outperforms existing work. The code is publicly available at https://github.com/lzh19961031/Dermoscopy_classification.
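The "zoom" step described above can be approximated by thresholding a coarse attention map and cropping the bounding box of the high-response region, so later stages see the lesion at higher relative resolution. A sketch under that simplified reading; the threshold and toy attention map are illustrative:

```python
import numpy as np

# Attention-guided crop: keep the bounding box of positions where the
# attention map exceeds a threshold.

def attention_crop(image, attn, thresh=0.5):
    ys, xs = np.where(attn > thresh)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]

img = np.arange(36, dtype=float).reshape(6, 6)
attn = np.zeros((6, 6))
attn[2:4, 1:5] = 1.0          # pretend the lesion activates here
patch = attention_crop(img, attn)
```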
74
Wang Y, Su J, Xu Q, Zhong Y. A Collaborative Learning Model for Skin Lesion Segmentation and Classification. Diagnostics (Basel) 2023; 13:diagnostics13050912. [PMID: 36900056 PMCID: PMC10001355 DOI: 10.3390/diagnostics13050912] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 02/19/2023] [Accepted: 02/24/2023] [Indexed: 03/06/2023] Open
Abstract
The automatic segmentation and classification of skin lesions are two essential tasks in computer-aided skin cancer diagnosis. Segmentation aims to detect the location and boundary of the skin lesion area, while classification is used to evaluate the type of skin lesion. The location and contour information of lesions provided by segmentation is essential for the classification of skin lesions, while the skin disease classification helps generate target localization maps to assist the segmentation task. Although segmentation and classification are studied independently in most cases, we find meaningful information can be explored using the correlation of dermatological segmentation and classification tasks, especially when the sample data are insufficient. In this paper, we propose a collaborative learning deep convolutional neural network (CL-DCNN) model based on the teacher-student learning method for dermatological segmentation and classification. To generate high-quality pseudo-labels, we provide a self-training method. The segmentation network is selectively retrained through classification network screening of pseudo-labels. Specifically, we obtain high-quality pseudo-labels for the segmentation network by providing a reliability measure method. We also employ class activation maps to improve the localization ability of the segmentation network. Furthermore, we provide lesion contour information by using the lesion segmentation masks to improve the recognition ability of the classification network. Experiments are carried out on the ISIC 2017 and ISIC Archive datasets. The CL-DCNN model achieved a Jaccard index of 79.1% on the skin lesion segmentation task and an average AUC of 93.7% on the skin disease classification task, which is superior to advanced skin lesion segmentation and classification methods.
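The pseudo-label screening step above can be sketched with a common, simplified reliability measure: keep only samples whose classifier confidence clears a threshold and whose prediction agrees with the pseudo-label. This stand-in (max softmax probability) is illustrative, not the paper's exact reliability method:

```python
import numpy as np

# Confidence-based pseudo-label filtering for self-training.

def filter_pseudo_labels(probs, labels, thresh=0.9):
    """probs: (N, K) class probabilities; labels: (N,) pseudo-labels.
    Returns indices where the predicted class matches the pseudo-label
    and the confidence exceeds thresh."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    keep = (conf >= thresh) & (pred == labels)
    return np.where(keep)[0]

probs = np.array([[0.95, 0.05], [0.60, 0.40], [0.08, 0.92]])
labels = np.array([0, 0, 1])
kept = filter_pseudo_labels(probs, labels)  # samples 0 and 2 survive
```

Only the surviving samples would then be fed back to retrain the segmentation network, which is the selective-retraining loop the abstract describes.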
Affiliation(s)
- Ying Wang
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Jie Su
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Correspondence: Tel.: +86-15054125550
- Qiuyu Xu
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
- Yixin Zhong
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Artificial Intelligence Research Institute, University of Jinan, Jinan 250022, China
75
Medical Image Classifications for 6G IoT-Enabled Smart Health Systems. Diagnostics (Basel) 2023; 13:diagnostics13050834. [PMID: 36899978 PMCID: PMC10000954 DOI: 10.3390/diagnostics13050834] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 02/03/2023] [Accepted: 02/19/2023] [Indexed: 02/24/2023] Open
Abstract
As day-to-day-generated data become massive in the 6G-enabled Internet of Medical Things (IoMT), the process of medical diagnosis becomes critical in the healthcare system. This paper presents a framework incorporated into the 6G-enabled IoMT to improve prediction accuracy and provide real-time medical diagnosis. The proposed framework integrates deep learning and optimization techniques to render accurate and precise results. The medical computed tomography images are preprocessed and fed into an efficient neural network designed for learning image representations, converting each image to a feature vector. The extracted features from each image are then learned using a MobileNetV3 architecture. Furthermore, we enhanced the performance of the arithmetic optimization algorithm (AOA) based on the hunger games search (HGS). In the developed method, named AOAHG, the operators of the HGS are applied to enhance the AOA's exploitation ability while allocating the feasible region. The developed AOAHG selects the most relevant features and ensures overall model classification improvement. To assess the validity of our framework, we conducted evaluation experiments on four datasets, including ISIC-2016 and PH2 for skin cancer detection, white blood cell (WBC) detection, and optical coherence tomography (OCT) classification, using different evaluation metrics. The framework showed remarkable performance compared to currently existing methods in the literature. In addition, the developed AOAHG produced better results than other feature selection (FS) approaches in terms of accuracy, precision, recall, and F1-score. For example, AOAHG achieved 87.30%, 96.40%, 88.60%, and 99.69% for the ISIC, PH2, WBC, and OCT datasets, respectively.
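AOAHG itself is a metaheuristic search over binary feature masks; what such a search optimizes is a wrapper-style fitness that trades classification quality against subset size. The sketch below shows only that fitness, with a nearest-centroid accuracy as a stand-in proxy; the proxy, weights, and toy data are all illustrative, not the paper's configuration:

```python
import numpy as np

# Wrapper-style feature-selection fitness: proxy accuracy on the
# selected features, penalized by how many features are kept.

def nearest_centroid_accuracy(X, y):
    centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((d.argmin(axis=1) == y).mean())

def fitness(mask, X, y, w=0.99):
    """Higher is better; mask is a binary feature-selection vector."""
    if mask.sum() == 0:
        return 0.0
    acc = nearest_centroid_accuracy(X[:, mask.astype(bool)], y)
    return w * acc - (1 - w) * mask.mean()

X = np.array([[0.0, 5.0], [0.1, -3.0], [1.0, 4.0], [0.9, -2.0]])
y = np.array([0, 0, 1, 1])          # only feature 0 separates the classes
good = fitness(np.array([1, 0]), X, y)
bad = fitness(np.array([0, 1]), X, y)
```

A metaheuristic like AOAHG would repeatedly propose masks and keep those with the higher fitness; here the informative feature scores above the noisy one, as expected.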
76
Bonechi S. ISIC_WSM: Generating Weak Segmentation Maps for the ISIC archive. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.12.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
77
The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer. Healthcare (Basel) 2023; 11:healthcare11030415. [PMID: 36766989 PMCID: PMC9914395 DOI: 10.3390/healthcare11030415] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 01/28/2023] [Accepted: 01/29/2023] [Indexed: 02/04/2023] Open
Abstract
Machine learning (ML) can enhance a dermatologist's work, from diagnosis to customized care. The development of ML algorithms in dermatology has lately been supported by links to digital data processing (e.g., electronic medical records, image archives, omics), faster computing, and cheaper data storage. This article describes the fundamentals of ML-based implementations, as well as future limits and concerns for the production of skin cancer detection and classification systems. We also explored three fields of dermatology using deep learning applications: (1) the classification of diseases by clinical photos, (2) dermatopathology visual classification of cancer, and (3) the measurement of skin diseases by smartphone applications and personal tracking systems. This analysis aims to provide dermatologists with a guide that helps demystify the basics of ML and its different applications, so that they can identify possible challenges correctly. This paper surveyed studies on skin cancer detection using deep learning to assess the features and advantages of different techniques. Moreover, it also defined the basic requirements for creating a skin cancer detection application, which revolve around two main issues: full image segmentation and tracking of the lesion on the skin using deep learning. Most of the techniques found in this survey address these two problems, and some also categorize the type of cancer.
|
78
|
Cheng L, Luo S, Li B, Liu R, Zhang Y, Zhang H. Multiple-instance learning for EEG based OSA event detection. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
79
|
Yanagisawa Y, Shido K, Kojima K, Yamasaki K. Convolutional neural network-based skin image segmentation model to improve classification of skin diseases in conventional and non-standardized picture images. J Dermatol Sci 2023; 109:30-36. [PMID: 36658056 DOI: 10.1016/j.jdermsci.2023.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 12/07/2022] [Accepted: 01/10/2023] [Indexed: 01/13/2023]
Abstract
BACKGROUND For dermatological practices, non-standardized conventional photo images are taken and collected as a mixture of variable fields of view, including close-up images focusing on designated lesions and long-shot images including normal skin and the background of the body surface. Computer-aided detection/diagnosis (CAD) models trained on non-standardized conventional photo images exhibit lower performance rates than CAD models that detect lesions in a localized small area, such as dermoscopic images. OBJECTIVE We aimed to develop a convolutional neural network (CNN) model for skin image segmentation to generate a skin disease image dataset suitable for CAD of multiple skin disease classification. METHODS We trained a DeepLabv3+-based CNN segmentation model to detect skin and lesion areas and cropped out areas satisfying the following conditions: more than 80% of the image is skin area, and more than 10% of the image is lesion area. RESULTS The generated CNN-segmented image database was examined using CAD of skin disease classification and achieved approximately 90% sensitivity and specificity in differentiating atopic dermatitis from malignant diseases and complications, such as mycosis fungoides, impetigo, and herpesvirus infection. The accuracy of skin disease classification on the CNN-segmented image dataset was almost equal to that of the manually cropped image dataset and higher than that of the original image dataset. CONCLUSION Our CNN segmentation model, which automatically extracts lesion and skin areas regardless of image fields, will reduce the burden of physician annotation and improve CAD performance.
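The crop-selection rule in the METHODS section (more than 80% skin, more than 10% lesion) can be sketched as a simple mask-fraction check; the function name and the boolean-mask layout below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def keep_crop(skin_mask, lesion_mask, skin_thr=0.80, lesion_thr=0.10):
    # Keep an image crop only if its predicted masks cover more than
    # skin_thr skin and more than lesion_thr lesion (fractions of pixels).
    total = skin_mask.size
    skin_frac = skin_mask.sum() / total
    lesion_frac = lesion_mask.sum() / total
    return bool(skin_frac > skin_thr and lesion_frac > lesion_thr)

h, w = 100, 100
skin = np.zeros((h, w), dtype=bool)
skin[5:95, 5:95] = True          # 81% of pixels are skin
lesion = np.zeros((h, w), dtype=bool)
lesion[40:75, 40:75] = True      # 12.25% of pixels are lesion
```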
Affiliation(s)
- Kosuke Shido
- Department of Dermatology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kaname Kojima
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Kenshi Yamasaki
- Department of Dermatology, Tohoku University Graduate School of Medicine, Sendai, Japan
|
80
|
Bai R, Zhou M. SL-HarDNet: Skin lesion segmentation with HarDNet. Front Bioeng Biotechnol 2023; 10:1028690. [PMID: 36686227 PMCID: PMC9849244 DOI: 10.3389/fbioe.2022.1028690] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 12/16/2022] [Indexed: 01/06/2023] Open
Abstract
Automatic segmentation of skin lesions from dermoscopy is of great significance for the early diagnosis of skin cancer. However, due to the complexity and fuzzy boundaries of skin lesions, automatic segmentation is a challenging task. In this paper, we present a novel skin lesion segmentation network based on HarDNet (SL-HarDNet). We adopt HarDNet as the backbone, which can learn more robust feature representations. Furthermore, we introduce three powerful modules: a cascaded fusion module (CFM), a spatial channel attention module (SCAM), and a feature aggregation module (FAM). Among them, CFM combines the features of different levels and effectively aggregates the semantic and location information of skin lesions. SCAM captures key spatial information. Cross-level features are effectively fused through FAM, and the obtained high-level semantic position features are reintegrated with the features from CFM to improve the segmentation performance of the model. We evaluate on the challenging ISIC-2016 & PH2 and ISIC-2018 datasets and extensively compare against state-of-the-art skin lesion segmentation methods. Experiments show that SL-HarDNet consistently outperforms other segmentation methods and achieves state-of-the-art performance.
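The abstract does not detail SCAM's internals, so the sketch below is a generic CBAM-style combination of channel and spatial re-weighting on a C×H×W feature map, included only to illustrate the general attention mechanism; all shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_channel_attention(x):
    # x: (C, H, W) feature map.
    # Channel attention: squeeze spatial dims, gate each channel.
    chan_gate = sigmoid(x.mean(axis=(1, 2)))          # (C,)
    x = x * chan_gate[:, None, None]
    # Spatial attention: squeeze channels, gate each location.
    spat_gate = sigmoid(x.mean(axis=0))               # (H, W)
    return x * spat_gate[None, :, :]

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 16, 16))
out = spatial_channel_attention(feat)
```

Both gates lie in (0, 1), so the output is a soft, per-channel and per-location down-weighting of the input rather than a hard mask.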
Affiliation(s)
- Ruifeng Bai
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
- University of Chinese Academy of Sciences, Beijing, China
- Mingwei Zhou
- Department of Dermatology, China-Japan Union Hospital of Jilin University, Changchun, China
|
81
|
Zafar M, Sharif MI, Sharif MI, Kadry S, Bukhari SAC, Rauf HT. Skin Lesion Analysis and Cancer Detection Based on Machine/Deep Learning Techniques: A Comprehensive Survey. Life (Basel) 2023; 13:146. [PMID: 36676093 PMCID: PMC9864434 DOI: 10.3390/life13010146] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 12/25/2022] [Accepted: 12/28/2022] [Indexed: 01/06/2023] Open
Abstract
The skin is the human body's largest organ, and skin cancer is considered among the most dangerous kinds of cancer. Various pathological variations in the human body can cause abnormal cell growth due to genetic disorders. These changes in human skin cells are very dangerous. Skin cancer slowly spreads to other parts of the body and, because of its high mortality rate, early diagnosis is essential. Visual checkups and manual examination of skin lesions are very tricky for determining skin cancer. Considering these concerns, numerous early-recognition approaches have been proposed for skin cancer. With the fast progression of computer-aided diagnosis systems, a variety of deep learning, machine learning, and computer vision approaches have been merged for the analysis of medical samples and uncommon skin lesion samples. This research provides an extensive literature review of the methodologies, techniques, and approaches applied to the examination of skin lesions to date. This survey covers preprocessing, segmentation, feature extraction, selection, and classification approaches for skin cancer recognition. The results of these approaches are very impressive, but some challenges still occur in the analysis of skin lesions because of complex and rare features. Hence, the main objective is to examine the existing techniques utilized in the discovery of skin cancer by identifying the obstacles, helping researchers contribute to future research.
Affiliation(s)
- Mehwish Zafar
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Muhammad Irfan Sharif
- Department of Computer Science, University of Education, Jauharabad Campus, Khushāb 41200, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- Syed Ahmad Chan Bukhari
- Division of Computer Science, Mathematics and Science, Collins College of Professional Studies, St. John’s University, Queens, NY 11439, USA
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
|
82
|
Ding Y, Yi Z, Li M, Long J, Lei S, Guo Y, Fan P, Zuo C, Wang Y. HI-MViT: A lightweight model for explainable skin disease classification based on modified MobileViT. Digit Health 2023; 9:20552076231207197. [PMID: 37846401 PMCID: PMC10576942 DOI: 10.1177/20552076231207197] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Accepted: 09/26/2023] [Indexed: 10/18/2023] Open
Abstract
Objective To develop a high-precision, explainable, lightweight skin disease classification model that can be deployed to mobile terminals. Methods In this study, we present HI-MViT, a lightweight network for explainable skin disease classification based on Modified MobileViT. HI-MViT is mainly composed of ordinary convolution, Improved-MV2, MobileViT blocks, global pooling, and fully connected layers. Improved-MV2 uses the combination of a shortcut and depthwise separable convolution to substantially decrease the amount of computation while ensuring efficient information interaction and memory use. The MobileViT block can efficiently encode local and global information. In addition, semantic feature dimensionality-reduction visualization and class activation mapping visualization methods are used with HI-MViT to further understand the attention area of the model when learning skin lesion images. Results The International Skin Imaging Collaboration has assembled and made available the ISIC series datasets. Experiments using the HI-MViT model on the ISIC-2018 dataset achieved scores of 0.931, 0.932, 0.961, and 0.977 on F1-Score, Accuracy, Average Precision (AP), and area under the curve (AUC). Compared with the top five algorithms of ISIC-2018 Task 3, the macro-average F1-Score, AP, and AUC improved by 6.9%, 6.8%, and 0.8% over the second-best model. Compared with ConvNeXt, the most competitive convolutional neural network architecture, our model is 5.0%, 3.4%, 2.3%, and 2.2% higher in F1-Score, Accuracy, AP, and AUC, respectively. The experiments on the ISIC-2017 dataset also achieved excellent results, with all indicators better than the top five algorithms of ISIC-2017 Task 3. Using the trained model to test on the PH2 dataset, an excellent performance score is obtained, which shows that it has good generalization performance.
Conclusions The skin disease classification model HI-MViT proposed in this article shows excellent classification performance and generalization performance in experiments. It demonstrates how the classification outcomes can be applied to dermatologists' computer-assisted diagnostics, enabling medical professionals to classify various dermoscopic images more rapidly and reliably.
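The class activation mapping used for explainability in HI-MViT follows the standard CAM recipe: a class's activation map is the fully-connected-layer weights for that class applied as a weighted sum over the final feature maps before global pooling. A minimal sketch with synthetic feature maps and weights (all shapes illustrative):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    # feature_maps: (C, H, W) activations before global pooling.
    # fc_weights: (num_classes, C) final fully connected layer weights.
    w = fc_weights[class_idx]                      # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)    # weighted sum -> (H, W)
    cam -= cam.min()                               # normalise to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.default_rng(2)
feats = rng.random(size=(4, 7, 7))
fc_w = rng.normal(size=(3, 4))
cam = class_activation_map(feats, fc_w, class_idx=1)
```

The normalised map can then be upsampled and overlaid on the input image to highlight the lesion region the model attended to.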
Affiliation(s)
- Yuhan Ding
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Zhenglin Yi
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Department of Urology, Xiangya Hospital, Central South University, Changsha, China
- Mengjuan Li
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Jianhong Long
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shaorong Lei
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yu Guo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Pengju Fan
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Chenchen Zuo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yongjie Wang
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
|
83
|
Extension-contraction transformation network for pancreas segmentation in abdominal CT scans. Comput Biol Med 2023; 152:106410. [PMID: 36516578 DOI: 10.1016/j.compbiomed.2022.106410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 11/08/2022] [Accepted: 12/03/2022] [Indexed: 12/12/2022]
Abstract
Accurate and automatic pancreas segmentation from abdominal computed tomography (CT) scans is crucial for the diagnosis and prognosis of pancreatic diseases. However, the pancreas accounts for a relatively small portion of the scan and presents high anatomical variability and low contrast, causing traditional automated segmentation methods to fail to generate satisfactory results. In this paper, we propose an extension-contraction transformation network (ECTN) and deploy it in a cascaded two-stage segmentation framework for accurate pancreas segmentation. This model enhances the perception of 3D context by distinguishing and exploiting the extension and contraction transformation of the pancreas between slices. It consists of an encoder, a segmentation decoder, and an extension-contraction (EC) decoder. The EC decoder is responsible for predicting the inter-slice extension and contraction transformation of the pancreas by feeding it the extension and contraction information generated by the segmentation decoder; meanwhile, its output is combined with the output of the segmentation decoder to reconstruct and refine the segmentation results. Quantitative evaluation is performed on the NIH Pancreas Segmentation (Pancreas-CT) dataset using 4-fold cross-validation. We obtained an average Precision of 86.59±6.14%, Recall of 85.11±5.96%, Dice similarity coefficient (DSC) of 85.58±3.98%, and Jaccard Index (JI) of 74.99±5.86%. The performance of our method outperforms several baseline and state-of-the-art methods.
|
84
|
Magdy A, Hussein H, Abdel-Kader RF, Salam KAE. Performance Enhancement of Skin Cancer Classification Using Computer Vision. IEEE ACCESS 2023; 11:72120-72133. [DOI: 10.1109/access.2023.3294974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Affiliation(s)
- Ahmed Magdy
- Electrical Engineering Department, Suez Canal University, Ismailia, Egypt
- Hadeer Hussein
- Electrical Engineering Department, Suez Canal University, Ismailia, Egypt
|
85
|
A Novel Framework for Melanoma Lesion Segmentation Using Multiparallel Depthwise Separable and Dilated Convolutions with Swish Activations. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:1847115. [PMID: 36794097 PMCID: PMC9925248 DOI: 10.1155/2023/1847115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 08/16/2022] [Accepted: 11/24/2022] [Indexed: 02/08/2023]
Abstract
Skin cancer remains one of the deadliest kinds of cancer, with a survival rate of about 18-20%. Early diagnosis and segmentation of the most lethal kind of cancer, melanoma, is a challenging and critical task. To diagnose medicinal conditions of melanoma lesions, researchers have proposed automatic and traditional approaches to accurately segment the lesions. However, visual similarity among lesions and intra-class differences are very high, which leads to low accuracy. Furthermore, traditional segmentation algorithms often require human input and cannot be utilized in automated systems. To address all of these issues, we provide an improved segmentation model based on depthwise separable convolutions that act on each spatial dimension of the image to segment the lesions. The fundamental idea behind these convolutions is to divide the feature learning step into two simpler parts: spatial feature learning and channel combination. Besides this, we employ parallel multi-dilated filters to encode multiple parallel features and broaden the view of the filters with dilations. For performance evaluation, the proposed approach is evaluated on three different datasets: DermIS, DermQuest, and ISIC2016. The findings indicate that the suggested segmentation model achieved a Dice score of 97% for DermIS and DermQuest and 94.7% for the ISIC2016 dataset.
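The depthwise separable factorisation the abstract describes, together with the swish activation, can be sketched in plain NumPy: a per-channel spatial filter (spatial learning) followed by a 1×1 pointwise channel mixer (channel combination). The filter shapes and the parameter-count comparison below are illustrative, not the paper's configuration.

```python
import numpy as np

def depthwise_conv(x, dw):                    # x: (C, H, W), dw: (C, k, k)
    # 'Valid' convolution applied independently to each channel.
    c, h, w = x.shape
    k = dw.shape[1]
    out = np.zeros((c, h - k + 1, w - k + 1))
    for ch in range(c):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i+k, j:j+k] * dw[ch])
    return out

def pointwise_conv(x, pw):                    # pw: (C_out, C_in)
    # 1x1 convolution = channel mixing at every spatial location.
    return np.tensordot(pw, x, axes=1)        # (C_out, H', W')

def swish(z):
    # Swish activation: z * sigmoid(z).
    return z / (1.0 + np.exp(-z))

c_in, c_out, k = 8, 16, 3
# Parameter savings vs. a standard k x k convolution:
standard_params = c_out * c_in * k * k                 # 16*8*9  = 1152
separable_params = c_in * k * k + c_out * c_in         # 72 + 128 = 200

rng = np.random.default_rng(3)
x = rng.normal(size=(c_in, 10, 10))
y = pointwise_conv(depthwise_conv(x, rng.normal(size=(c_in, k, k))),
                   rng.normal(size=(c_out, c_in)))
act = swish(y)
```

For these toy sizes the factorised block needs 200 parameters versus 1152 for a standard convolution, which is the efficiency argument behind the design.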
|
86
|
A comprehensive analysis of dermoscopy images for melanoma detection via deep CNN features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
87
|
An Ensemble of Transfer Learning Models for the Prediction of Skin Cancers with Conditional Generative Adversarial Networks. Diagnostics (Basel) 2022; 12:diagnostics12123145. [PMID: 36553152 PMCID: PMC9777332 DOI: 10.3390/diagnostics12123145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/04/2022] [Accepted: 12/07/2022] [Indexed: 12/15/2022] Open
Abstract
Skin cancer is one of the most severe forms of cancer, and it can spread to other parts of the body if not detected early. Therefore, diagnosing and treating skin cancer patients at an early stage is crucial. Manual skin cancer diagnosis is both time-consuming and expensive, and incorrect diagnoses are made due to the high similarity between the various skin cancers. Improved categorization of multiclass skin cancers requires the development of automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset underwent data augmentation using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent class imbalance issues that may lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. Thereafter, traditional augmentation methods are used to augment our existing training set to improve the performance of pre-trained deep models on the skin cancer classification task. This improved performance is then compared to models developed using the unbalanced dataset. In addition, we formed an ensemble of finely tuned transfer learning models, trained on balanced and unbalanced datasets, and used them to make predictions. With appropriate data augmentation, the proposed models attained an accuracy of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101. The ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of the models' performance concluded that this method possibly leads to enhanced performance in skin cancer categorization compared to past efforts.
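The ensembling step can be sketched as averaging the class-probability outputs of the fine-tuned models (VGG16/ResNet50/ResNet101 in the paper) and taking the argmax; the probability arrays below are synthetic stand-ins for illustration.

```python
import numpy as np

def ensemble_predict(prob_list):
    # prob_list: list of (N, num_classes) arrays, rows summing to 1.
    # Average the probabilities across models, then pick the top class.
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=1), avg

# Two of three toy "models" favour class 2 for the first sample,
# and class 0 wins on average for the second sample.
p1 = np.array([[0.2, 0.1, 0.7], [0.6, 0.3, 0.1]])
p2 = np.array([[0.1, 0.3, 0.6], [0.5, 0.4, 0.1]])
p3 = np.array([[0.5, 0.3, 0.2], [0.2, 0.2, 0.6]])
labels, avg = ensemble_predict([p1, p2, p3])
```

Averaging probabilities (soft voting) tends to be more stable than majority voting on hard labels when the member models are similarly calibrated.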
|
88
|
Zhang Y, Xie F, Song X, Zhou H, Yang Y, Zhang H, Liu J. A rotation meanout network with invariance for dermoscopy image classification and retrieval. Comput Biol Med 2022; 151:106272. [PMID: 36368111 DOI: 10.1016/j.compbiomed.2022.106272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 10/07/2022] [Accepted: 10/30/2022] [Indexed: 11/07/2022]
Abstract
The computer-aided diagnosis (CAD) system can provide a reference basis for the clinical diagnosis of skin diseases. Convolutional neural networks (CNNs) can extract not only visual elements such as colors and shapes but also semantic features. As such, they have achieved great improvements in many dermoscopy image tasks. Dermoscopy imaging has no principal orientation, so the datasets contain a large number of rotated skin lesions. However, CNNs lack rotation invariance, which is bound to affect their robustness against rotations. To tackle this issue, we propose a rotation meanout (RM) network to extract rotation-invariant features from dermoscopy images. In RM, each set of rotated feature maps corresponds to a set of outputs of the weight-sharing convolutions, and they are fused using a meanout strategy to obtain the final feature maps. Through theoretical derivation, the proposed RM network is rotation-equivariant and can extract rotation-invariant features when followed by a global average pooling (GAP) operation. The extracted rotation-invariant features can better represent the original data in classification and retrieval tasks for dermoscopy images. RM is a general operation: it does not change the network structure or add any parameters and can be flexibly embedded in any part of a CNN. Extensive experiments are conducted on a dermoscopy image dataset. The results show that our method outperforms other anti-rotation methods and achieves great improvements in skin disease classification and retrieval tasks, indicating the potential of rotation invariance in the field of dermoscopy images.
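The meanout-then-GAP property can be checked numerically: averaging a shared filter's responses over the four 90-degree rotations of the input and then pooling yields the same feature for an image and for its rotated copy, because rotating the input only permutes the set of responses being averaged. A minimal single-channel sketch (toy filter and image, not the paper's network):

```python
import numpy as np

def conv_valid(x, k):
    # Minimal 2D 'valid' correlation with a single filter.
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def rm_gap_feature(x, k):
    # Meanout over the rotation group {0, 90, 180, 270} degrees, then GAP.
    responses = [conv_valid(np.rot90(x, r), k) for r in range(4)]
    return float(np.mean([r.mean() for r in responses]))

rng = np.random.default_rng(4)
img = rng.normal(size=(12, 12))
kern = rng.normal(size=(3, 3))
f_orig = rm_gap_feature(img, kern)
f_rot = rm_gap_feature(np.rot90(img), kern)   # same feature, by symmetry
```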
Affiliation(s)
- Yilan Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Fengying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Xuedong Song
- Shanghai Aerospace Control Technology Institute, Shanghai 201109, China
- Hangning Zhou
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Yiguang Yang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Haopeng Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Jie Liu
- Department of Dermatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
|
89
|
Wang S, Yin Y, Wang D, Wang Y, Jin Y. Interpretability-Based Multimodal Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:12623-12637. [PMID: 34546933 DOI: 10.1109/tcyb.2021.3069920] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Skin lesion diagnosis is a key step for skin cancer screening, which requires high accuracy and interpretability. Though many computer-aided methods, especially deep learning methods, have made remarkable achievements in skin lesion diagnosis, their generalization and interpretability are still a challenge. To solve this issue, we propose an interpretability-based multimodal convolutional neural network (IM-CNN), a multiclass classification model that takes skin lesion images and patient metadata as input for skin lesion diagnosis. The structure of IM-CNN consists of three main paths to deal with metadata, features extracted from the segmented skin lesion with domain knowledge, and skin lesion images, respectively. We add interpretable visual modules to provide explanations for both images and metadata. In addition to area under the ROC curve (AUC), sensitivity, and specificity, we introduce a new indicator, the area under the ROC curve restricted to sensitivity above 80% (AUC_SEN_80), for performance evaluation. Extensive experimental studies are conducted on the popular HAM10000 dataset, and the results indicate that the proposed model has overwhelming advantages compared with popular deep learning models, such as DenseNet, ResNet, and other state-of-the-art models for melanoma diagnosis. The proposed multimodal model also achieves on average 72% and 21% improvement in terms of sensitivity and AUC_SEN_80, respectively, compared with the single-modal model. The visual explanations can also help gain trust from dermatologists and realize human-machine collaboration, effectively reducing the limitations of black-box models in supporting medical decision making.
|
90
|
Ayas S. Multiclass skin lesion classification in dermoscopic images using swin transformer model. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-08053-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
91
|
Rehman M, Ali M, Obayya M, Asghar J, Hussain L, Nour MK, Negm N, Mustafa Hilal A. Machine learning based skin lesion segmentation method with novel borders and hair removal techniques. PLoS One 2022; 17:e0275781. [PMID: 36355845 PMCID: PMC9648757 DOI: 10.1371/journal.pone.0275781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 09/12/2022] [Indexed: 11/12/2022] Open
Abstract
The effective segmentation of lesion(s) from dermoscopic skin images assists Computer-Aided Diagnosis (CAD) systems in improving the diagnosis rate of skin cancer. The results of existing skin lesion segmentation techniques are not up to the mark for dermoscopic images with artifacts such as varying-size corner borders with color similar to the lesion(s) and/or hairs having low contrast with the surrounding background. To improve the results of existing skin lesion segmentation techniques for such dermoscopic images, an effective skin lesion segmentation method is proposed in this research work. The proposed method searches for the presence of corner borders in the given dermoscopic image and removes them if found; otherwise, it searches for the presence of hairs and eliminates them if present. Next, it enhances the resultant image using a state-of-the-art image enhancement method and segments the lesion from it using a machine learning technique, namely the GrabCut method. The proposed method was tested on the PH2 and ISIC 2018 datasets, containing 200 images each, and its accuracy was measured with two evaluation metrics, i.e., the Jaccard index and the Dice index. The evaluation results show that our proposed skin lesion segmentation method obtained Jaccard index values of 0.77 and 0.80 and Dice index values of 0.87 and 0.82 on the PH2 and ISIC 2018 datasets, respectively, which are better than state-of-the-art skin lesion segmentation techniques.
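The corner-border check can be approximated with a simple intensity heuristic: dark vignette-style borders make the corner patches much darker than the image centre. The paper does not specify its exact detection rule, so the patch size and threshold below are illustrative assumptions. (The subsequent segmentation step would typically call OpenCV's `cv2.grabCut`, omitted here to keep the sketch dependency-free.)

```python
import numpy as np

def has_corner_borders(gray, patch=10, ratio=0.5):
    # Compare mean intensity of the four corner patches against the
    # central region; dark borders -> corners much darker than centre.
    h, w = gray.shape
    corners = [gray[:patch, :patch], gray[:patch, -patch:],
               gray[-patch:, :patch], gray[-patch:, -patch:]]
    corner_mean = np.mean([c.mean() for c in corners])
    centre = gray[h//4:3*h//4, w//4:3*w//4].mean()
    return bool(corner_mean < ratio * centre)

img = np.full((120, 120), 180.0)       # bright skin-like background
bordered = img.copy()
yy, xx = np.mgrid[:120, :120]
bordered[(yy - 60)**2 + (xx - 60)**2 > 58**2] = 10.0   # dark round border
```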
Affiliation(s)
- Mohibur Rehman
- Department of Computer Science & Information Technology, Hazara University, Mansehra, Pakistan
- Mushtaq Ali
- Department of Computer Science & Information Technology, Hazara University, Mansehra, Pakistan
- Marwa Obayya
- Department of Biomedical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Junaid Asghar
- Faculty of Pharmacy, Gomal University, D I Khan, Pakistan
- Lal Hussain
- Department of Computer Science and Information Technology, King Abdullah Campus Chatter Kalas, University of Azad Jammu and Kashmir, Muzaffarabad, Azad Kashmir, Pakistan
- Department of Computer Science and Information Technology, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, Azad Kashmir, Pakistan
- Mohamed K. Nour
- Department of Computer Sciences, College of Computing and Information System, Umm Al-Qura University, Mecca, Saudi Arabia
- Noha Negm
- Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha, Saudi Arabia
- Anwer Mustafa Hilal
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
|
92
|
Qian S, Ren K, Zhang W, Ning H. Skin lesion classification using CNNs with grouping of multi-scale attention and class-specific loss weighting. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107166. [PMID: 36209623 DOI: 10.1016/j.cmpb.2022.107166] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 09/05/2022] [Accepted: 09/29/2022] [Indexed: 06/16/2023]
Abstract
As one of the most common cancers globally, skin cancer has a rising incidence. Dermoscopy-based classification has become the most effective method for diagnosing skin lesion types due to its accuracy and non-invasive nature, playing a significant role in reducing mortality. Although great breakthroughs in skin lesion classification have been made with the application of convolutional neural networks, the inter-class similarity and intra-class variation in skin lesion images, the high class imbalance of the datasets, and the lack of ability to focus on the lesion area all affect the classification results of the model. To solve these problems, on the one hand, we use a grouping of multi-scale attention blocks (GMAB) to extract multi-scale fine-grained features and improve the model's ability to focus on the lesion area. On the other hand, we adopt class-specific loss weighting for the problem of category imbalance. In this paper, we propose a deep convolutional neural network dermoscopic image classification method based on the grouping of multi-scale attention blocks and class-specific loss weighting. We evaluated our model on the HAM10000 dataset, and the results showed that the ACC and AUC of the proposed method were 91.6% and 97.1%, respectively, demonstrating good results on dermoscopic classification tasks.
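Class-specific loss weighting is commonly implemented as inverse-frequency weights inside the cross-entropy loss, so that rare lesion classes contribute more per sample; the paper's exact weighting formula is not restated here, so the recipe below is a generic sketch.

```python
import numpy as np

def class_weights(labels, num_classes):
    # Inverse-frequency weights, normalised so the mean weight is 1.
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts.sum() / (num_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    # probs: (N, C) predicted probabilities; labels: (N,) int targets.
    p_true = probs[np.arange(len(labels)), labels]
    per_sample = -weights[labels] * np.log(np.clip(p_true, 1e-12, None))
    return float(per_sample.mean())

labels = np.array([0] * 90 + [1] * 10)            # 9:1 class imbalance
w = class_weights(labels, num_classes=2)           # minority gets weight 5.0
probs = np.full((100, 2), 0.5)                     # uninformative predictions
loss = weighted_cross_entropy(probs, labels, w)
```

With uniform 0.5 predictions the weighted loss reduces exactly to log 2, since the normalised weights average to 1 over the training set.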
Affiliation(s)
- Shenyi Qian
- Information Management Center, Zhengzhou University of Light Industry, Zhengzhou 450001, China
- Kunpeng Ren
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
- Weiwei Zhang
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
- Haohan Ning
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
93
Sadid SR, Kabir MS, Mahmud ST, Islam MS, Islam AHMW, Arafat MT. Segmenting 3D geometry of left coronary artery from coronary CT angiography using deep learning for hemodynamic evaluation. Biomed Phys Eng Express 2022; 8. [DOI: 10.1088/2057-1976/ac9e03] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 10/27/2022] [Indexed: 11/11/2022]
Abstract
While coronary CT angiography (CCTA) is crucial for detecting several coronary artery diseases, it fails to provide essential hemodynamic parameters for early detection and treatment. These parameters can be obtained by performing computational fluid dynamic (CFD) analysis on the 3D artery geometry generated by CCTA image segmentation. As the coronary artery is small, manually segmenting the left coronary artery from CCTA scans is a laborious, time-intensive, error-prone, and complicated task that also requires a high level of expertise. Researchers have recently proposed various automated segmentation techniques to combat these issues. To further aid this process, we present CoronarySegNet, a deep learning-based framework for autonomous and accurate segmentation and generation of the 3D geometry of the left coronary artery. The design is based on the original U-net topology and includes channel-aware attention blocks as well as deep residual blocks with spatial dropout, which promote feature map independence by dropping entire 2D feature maps rather than individual elements. We trained, tested, and statistically evaluated our model using CCTA images acquired from various medical centers across Bangladesh and from the Rotterdam Coronary Artery Algorithm Evaluation challenge dataset to improve generality. In empirical assessment, CoronarySegNet outperforms several other cutting-edge segmentation algorithms, attaining a Dice similarity coefficient of 0.78 on average, with the improvement being statistically significant (p < 0.05). Additionally, the 3D geometries generated by the deep learning and semi-automatic methods were statistically similar, and hemodynamic evaluation performed on these 3D geometries showed comparable results.
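The Dice similarity coefficient reported above has a standard definition (twice the overlap divided by the total mask sizes); a minimal numpy sketch, not the authors' code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). `eps` guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```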
94
S M J, P M, Aravindan C, Appavu R. Classification of skin cancer from dermoscopic images using deep neural network architectures. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:15763-15778. [PMID: 36250184 PMCID: PMC9554840 DOI: 10.1007/s11042-022-13847-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 03/18/2022] [Accepted: 09/06/2022] [Indexed: 06/16/2023]
Abstract
A powerful medical decision support system for classifying skin lesions from dermoscopic images is an important tool for the prognosis of skin cancer. In recent years, deep convolutional neural networks (DCNNs) have made significant advances in detecting skin cancer types from dermoscopic images, despite the fine-grained variability in their appearance. The main objective of this research is to develop a DCNN-based model that automatically classifies skin cancer into melanoma and non-melanoma with high accuracy. The datasets used in this work were obtained from the popular ISIC-2019 and ISIC-2020 challenges, which have different image resolutions and class imbalance problems. To address these two problems and achieve high classification performance, we used the EfficientNet architecture with transfer learning, which learns complex, fine-grained patterns from lesion images by automatically scaling the depth, width, and resolution of the network. We augmented our dataset to overcome the class imbalance problem and also used metadata information to improve the classification results. To further improve the efficiency of EfficientNet, we used the Ranger optimizer, which considerably reduces the hyperparameter tuning required to achieve state-of-the-art results. We conducted several experiments with different transfer learning models, and our results show that EfficientNet variants outperformed other architectures on skin lesion classification tasks. The performance of the proposed system was evaluated using the area under the ROC curve (AUC-ROC), obtaining a score of 0.9681 through optimal fine-tuning of EfficientNet-B6 with the Ranger optimizer.
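The AUC-ROC metric used for evaluation equals the probability that a randomly chosen positive sample is scored higher than a randomly chosen negative one (the Mann-Whitney interpretation). The O(n²) pairwise sketch below is chosen for clarity and is not what the authors used:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs where the positive is ranked higher; ties count as half a win."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```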
Affiliation(s)
- Jaisakthi S M
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai, India
- Mirunalini P
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai, India
- Chandrabose Aravindan
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai, India
- Rajagopal Appavu
- Taneja College of Pharmacy, University of South Florida Health-Tampa, Tampa, FL, USA
95
Xie N, Zhou H, Yu L, Huang S, Tian C, Li K, Jiang Y, Hu ZY, Ouyang Q. Artificial intelligence scale-invariant feature transform algorithm-based system to improve the calculation accuracy of Ki-67 index in invasive breast cancer: a multicenter retrospective study. ANNALS OF TRANSLATIONAL MEDICINE 2022; 10:1067. [PMID: 36330383 PMCID: PMC9622502 DOI: 10.21037/atm-22-4254] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 09/27/2022] [Indexed: 09/02/2023]
Abstract
BACKGROUND Ki-67 is a key indicator of tumor proliferation activity. However, no standardized criterion has been established for Ki-67 index calculation. The scale-invariant feature transform (SIFT) algorithm identifies features that are robust to rotation, translation, scaling, and linear intensity changes, making it suitable for matching and registration in computer vision. This study therefore aimed to develop a SIFT-based computer-aided system for Ki-67 calculation in breast cancer. METHODS Hematoxylin and eosin (HE)-stained and Ki-67-stained slides were scanned to obtain whole slide images (WSIs). Regions of breast cancer (BC) tissue and non-BC tissue were labeled by experienced pathologists. The labeled WSIs were randomly divided into training, validation, and test sets at a fixed 7:2:1 ratio. The algorithm for identifying cancerous regions was developed with a ResNet network. Registration between paired consecutive HE-stained and Ki-67-stained WSIs was based on a pyramid model using SIFT feature matching. After registration, nuclear-stained Ki-67-positive cells were counted in each identified invasive cancerous region using color deconvolution. To assess accuracy, the AI-assisted result for each slide was compared with the pathologists' manual diagnosis: if the two positive rates differed by no more than 10%, the result was considered consistent; otherwise, it was considered inconsistent. RESULTS The accuracy of the AI-based algorithm in identifying breast cancer tissue in HE-stained slides was 93%, with an area under the curve (AUC) of 0.98. After registration, Ki-67-positive cells were identified among cancerous cells across entire WSIs and the Ki-67 index was calculated, with an accuracy of 91.5% compared with the gold-standard pathological reports. Using this system, evaluating all 771 tested pairs of HE- and Ki-67-stained slides took about 1 hour, with each Ki-67 result taking less than 2 seconds. CONCLUSIONS Using a pyramid model and SIFT feature matching, we developed an AI-based automatic cancer identification and Ki-67 index calculation system, which could improve the accuracy of Ki-67 index calculation and make results reproducible across hospitals and centers.
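The study's consistency criterion (the AI and pathologist Ki-67 positive rates may differ by at most 10%) translates directly into code; the function names below are illustrative, not from the paper:

```python
def ki67_results_agree(ai_rate, pathologist_rate, tolerance=0.10):
    """Consistency rule described in the study: the two Ki-67 positive
    rates agree when they differ by no more than the tolerance."""
    return abs(ai_rate - pathologist_rate) <= tolerance

def agreement_accuracy(rate_pairs, tolerance=0.10):
    """Fraction of (AI, pathologist) rate pairs satisfying the rule."""
    flags = [ki67_results_agree(a, p, tolerance) for a, p in rate_pairs]
    return sum(flags) / len(flags)
```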
Affiliation(s)
- Ning Xie
- Medical Department of Breast Cancer, Hunan Cancer Hospital, Changsha, China
- Department of Breast Cancer Medical Oncology, the Affiliated Cancer Hospital of Xiangya Medical School, Central South University, Changsha, China
- Haoyu Zhou
- College of Information and Intelligence, Hunan Agricultural University, Changsha, China
- Li Yu
- Ningbo Lensee Intelligent Technology Co., Ltd., Ningbo, China
- Shaobing Huang
- Ningbo Lensee Intelligent Technology Co., Ltd., Ningbo, China
- Can Tian
- Medical Department of Breast Cancer, Hunan Cancer Hospital, Changsha, China
- Department of Breast Cancer Medical Oncology, the Affiliated Cancer Hospital of Xiangya Medical School, Central South University, Changsha, China
- Keyu Li
- Department of Respiratory Medicine, The First Hospital of Changsha City, Changsha, China
- Yi Jiang
- Department of Pathology, the Second Xiangya Hospital of Central South University, Changsha, China
- Zhe-Yu Hu
- Medical Department of Breast Cancer, Hunan Cancer Hospital, Changsha, China
- Department of Breast Cancer Medical Oncology, the Affiliated Cancer Hospital of Xiangya Medical School, Central South University, Changsha, China
- Quchang Ouyang
- Medical Department of Breast Cancer, Hunan Cancer Hospital, Changsha, China
- Department of Breast Cancer Medical Oncology, the Affiliated Cancer Hospital of Xiangya Medical School, Central South University, Changsha, China
96
Multi-Organ Segmentation Using a Low-Resource Architecture. INFORMATION 2022. [DOI: 10.3390/info13100472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Since their inception, deep-learning architectures have shown promising results for automatic segmentation. However, despite the technical advances introduced by fully convolutional networks, generative adversarial networks, and recurrent neural networks, and their usage in hybrid architectures, automatic segmentation in the medical field is still not used at scale. One main reason is data scarcity and quality, which leads to a lack of annotated data that hinders the generalization of the models. The second main issue is the difficulty of training deep models: the process uses large amounts of GPU memory (which may exceed current hardware limitations) and requires long training times. In this article, we aim to show that, despite these issues, good results can be obtained even with a lower-resource architecture, thus opening the way for more researchers to employ deep neural networks. To achieve multi-organ segmentation, we employ modern pre-processing techniques, a smart model design, and fusion of several models trained on the same dataset. Our architecture is compared against state-of-the-art methods from a publicly available challenge, and the notable results demonstrate the effectiveness of our method.
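Fusion of several models trained on the same dataset can take many forms; one simple possibility, shown here as a sketch and not necessarily the paper's fusion rule, is a per-pixel majority vote over the binary masks the models produce:

```python
import numpy as np

def majority_vote_fusion(masks):
    """Fuse binary segmentation masks from several models by per-pixel
    majority vote; ties are resolved toward foreground."""
    stacked = np.stack(masks).astype(int)
    votes = stacked.sum(axis=0)
    # a pixel is foreground when at least half of the models say so
    return (votes * 2 >= len(masks)).astype(np.uint8)
```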
97
MSLANet: multi-scale long attention network for skin lesion classification. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03320-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
98
Elaziz MA, Dahou A, El-Sappagh S, Mabrouk A, Gaber MM. AHA-AO: Artificial Hummingbird Algorithm with Aquila Optimization for Efficient Feature Selection in Medical Image Classification. APPLIED SCIENCES 2022; 12:9710. [DOI: 10.3390/app12199710] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
This paper presents a system for medical image diagnosis that uses transfer learning (TL) and feature selection techniques. The main aim of TL on pre-trained models such as MobileNetV3 is to extract features from raw images. A novel feature selection optimization algorithm, the Artificial Hummingbird Algorithm based on Aquila Optimization (AHA-AO), is proposed. AHA-AO selects only the most relevant features, improving overall model classification. Our methodology was evaluated using four datasets: ISIC-2016, PH2, Chest-XRay, and Blood-Cell. We compared the proposed feature selection algorithm with five of the most popular feature selection optimization algorithms, obtaining an accuracy of 87.30% on ISIC-2016, 97.50% on PH2, 86.90% on Chest-XRay, and 88.60% on Blood-Cell. AHA-AO outperformed the other optimization techniques and was also faster than the other feature selection models at determining the relevant features. The proposed feature selection algorithm successfully improved the performance and speed of the overall deep learning models.
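Metaheuristic feature selectors of this kind typically score each candidate binary feature mask with a fitness that trades classification quality against subset size. The sketch below uses a nearest-centroid stand-in classifier and an `alpha` weighting that are illustrative assumptions, not the paper's implementation (it also assumes integer labels 0..K-1):

```python
import numpy as np

def selection_fitness(mask, X, y, alpha=0.99):
    """Wrapper-style fitness for a binary feature mask: reward training
    accuracy of a cheap classifier, penalise the fraction of features kept."""
    if mask.sum() == 0:
        return 0.0  # an empty subset is worthless
    Xs = X[:, mask.astype(bool)]
    # nearest-centroid stand-in classifier, evaluated on the training data
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in np.unique(y)])
    dists = ((Xs[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    acc = (np.argmin(dists, axis=1) == y).mean()
    return float(alpha * acc + (1 - alpha) * (1 - mask.mean()))
```

A metaheuristic such as AHA-AO would then search over masks to maximise this fitness.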
99
Li H, Li W, Chang J, Zhou L, Luo J, Guo Y. Dermoscopy lesion classification based on GANs and a fuzzy rank-based ensemble of CNN models. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8b60] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 08/19/2022] [Indexed: 11/11/2022]
Abstract
Background and Objective. Skin lesion classification with deep learning remains a considerable challenge due to high similarity among classes, large intra-class differences, serious class imbalance in the data, and poor classification accuracy with low robustness. Approach. To address these issues, a two-stage framework for dermoscopy lesion classification using adversarial training and a fuzzy rank-based ensemble of multilayer feature fusion convolutional neural network (CNN) models is proposed. In the first stage, dermoscopy dataset augmentation based on generative adversarial networks is used to obtain realistic dermoscopy lesion images, significantly improving the balance of lesion counts across classes. In the second stage, a fuzzy rank-based ensemble of multilayer feature fusion CNN models classifies the skin lesions. In addition, an efficient channel and spatial attention module is introduced, in which a novel dilated pyramid pooling structure extracts multiscale features from an enlarged receptive field and filters meaningful information from the initial features. Combining the cross-entropy loss function with the focal loss function, a novel united loss function is designed to reduce the intra-class sample distance and to focus on difficult, error-prone samples, improving the recognition accuracy of the proposed model. Main results. The common HAM10000 dataset is used to evaluate and verify the effectiveness of the proposed method. The subjective and objective experimental results demonstrate that the proposed method is superior to state-of-the-art methods for skin lesion classification due to its higher accuracy, specificity, and robustness. Significance. The proposed method effectively improves the classification performance of the model for skin diseases, which will help doctors make accurate and efficient diagnoses, reduce the incidence rate, and improve patient survival rates.
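A united loss of the kind described combines cross-entropy with the focal loss, whose (1-p)^γ factor down-weights easy samples. The sketch below shows the standard focal term and an equal-weight combination; `lam` and `gamma` are illustrative values, not the paper's settings:

```python
import numpy as np

def focal_loss(p_true, gamma=2.0):
    """Focal loss for the probability assigned to the true class:
    the (1 - p)^gamma factor suppresses easy, well-classified samples."""
    eps = 1e-12
    return -((1.0 - p_true) ** gamma) * np.log(p_true + eps)

def united_loss(p_true, lam=0.5, gamma=2.0):
    """Weighted sum of cross-entropy and focal loss (lam is assumed)."""
    ce = -np.log(p_true + 1e-12)
    return lam * ce + (1.0 - lam) * focal_loss(p_true, gamma)
```

For a well-classified sample (p = 0.9) the focal term is nearly zero, while a hard sample (p = 0.1) keeps almost its full cross-entropy contribution, which is what focuses training on error-prone cases.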
100
Dong B, Fu X, Kang X. SSGNet: semi-supervised multi-path grid network for diagnosing melanoma. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01100-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]