201
Skin Lesion Segmentation by U-Net with Adaptive Skip Connection and Structural Awareness. Appl Sci (Basel) 2021. [DOI: 10.3390/app11104528]
Abstract
Skin lesion segmentation is one of the pivotal stages in the diagnosis of melanoma. Many methods have been proposed but, to date, this remains a challenging task. Variations in size and color, fuzzy boundaries, and the low contrast between lesion and normal skin are adverse factors that cause deficient or excessive delineation of lesions, or even inaccurate detection of the lesion location. In this paper, to counter these problems, we introduce a deep learning method based on the U-Net architecture, which performs three tasks, namely lesion segmentation, boundary distance map regression, and contour detection. The two auxiliary tasks provide an awareness of boundary and shape to the main encoder, which improves object localization and pixel-wise classification in the transition region from lesion tissue to healthy tissue. Moreover, to address the large variation in size, Selective Kernel modules placed in the skip connections transfer multi-receptive-field features from the encoder to the decoder. Our method is evaluated on three publicly available datasets: ISBI 2016, ISBI 2017, and PH2. Extensive experimental results show the effectiveness of the proposed method in the task of skin lesion segmentation.
202
Al-Masni MA, Kim DH. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci Rep 2021; 11:10191. [PMID: 33986375] [PMCID: PMC8119726] [DOI: 10.1038/s41598-021-89686-3]
Abstract
Medical image segmentation of tissue abnormalities, key organs, or the blood vascular system is of great significance for any computerized diagnostic system. However, automatic segmentation in medical image analysis is a challenging task, since it requires sophisticated knowledge of the target organ's anatomy. This paper develops an end-to-end deep learning segmentation method called Contextual Multi-Scale Multi-Level Network (CMM-Net). The main idea is to fuse the global contextual features of multiple spatial scales at every contracting convolutional level of the U-Net. We also re-exploit the dilated convolution module, which enables an expansion of the receptive field with different rates depending on the size of the feature maps throughout the network. In addition, an augmented testing scheme referred to as Inversion Recovery (IR), which uses logical "OR" and "AND" operators, is developed. The proposed segmentation network is evaluated on three medical imaging datasets: ISIC 2017 for skin lesion segmentation from dermoscopy images, DRIVE for retinal blood vessel segmentation from fundus images, and BraTS 2018 for brain glioma segmentation from MR scans. The experimental results showed state-of-the-art performance, with overall Dice similarity coefficients of 85.78%, 80.27%, and 88.96% on the segmentation of skin lesions, retinal blood vessels, and brain tumors, respectively. The proposed CMM-Net is inherently general and could be applied efficiently as a robust tool for various medical image segmentation tasks.
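The IR scheme in this abstract reduces to elementwise logical fusion of binary masks. As a minimal sketch (not the authors' code; `fuse_predictions` and the toy masks are hypothetical stand-ins for predictions on an original image and its intensity-inverted counterpart), the OR/AND combination might look like:

```python
import numpy as np

def fuse_predictions(mask_orig, mask_inv, mode="or"):
    """Fuse two binary segmentation masks with a logical operator.

    mask_orig: prediction on the original image.
    mask_inv:  prediction on the inverted image, mapped back.
    """
    a = mask_orig.astype(bool)
    b = mask_inv.astype(bool)
    fused = np.logical_or(a, b) if mode == "or" else np.logical_and(a, b)
    return fused.astype(np.uint8)

# Toy 1-D "masks": OR recovers pixels missed in one pass,
# AND keeps only the consensus region.
m1 = np.array([1, 1, 0, 0, 1])
m2 = np.array([1, 0, 0, 1, 1])
print(fuse_predictions(m1, m2, "or"))   # [1 1 0 1 1]
print(fuse_predictions(m1, m2, "and"))  # [1 0 0 0 1]
```

OR trades precision for recall, AND the reverse; which operator helps depends on whether the base network under- or over-segments.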
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea.
203
Jiang S, Li H, Jin Z. A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis. IEEE J Biomed Health Inform 2021; 25:1483-1494. [PMID: 33449890] [DOI: 10.1109/jbhi.2021.3052044]
Abstract
Owing to the high incidence rate and severe impact of skin cancer, the precise diagnosis of malignant skin tumors is a significant goal, especially considering that treatment is normally effective if the tumor is detected early. Limited published histopathological image sets and the lack of an intuitive correspondence between the features of lesion areas and a certain type of skin cancer pose a challenge to the establishment of high-quality, interpretable computer-aided diagnostic (CAD) systems. To solve this problem, a lightweight attention-based deep learning framework, DRANet, is proposed to differentiate 11 types of skin diseases based on a real histopathological image set collected by us over the last 10 years. The CAD system can output not only the name of a disease but also a visualized diagnostic report showing possible areas related to it. The experimental results demonstrate that DRANet obtains significantly better performance than baseline models of comparable parameter size (i.e., InceptionV3, ResNet50, VGG16, and VGG19), achieving competitive accuracy with fewer model parameters. Visualized results produced by the hidden layers of DRANet highlight part of the class-specific regions of diagnostic points and are valuable for decision making in the diagnosis of skin diseases.
204
Bagheri F, Tarokh MJ, Ziaratban M. Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102533]
205
Mijwil MM. Skin cancer disease images classification using deep learning solutions. Multimed Tools Appl 2021. [DOI: 10.1007/s11042-021-10952-7]
206
Abdar M, Samami M, Dehghani Mahmoodabad S, Doan T, Mazoure B, Hashemifesharaki R, Liu L, Khosravi A, Acharya UR, Makarenkov V, Nahavandi S. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput Biol Med 2021; 135:104418. [PMID: 34052016] [DOI: 10.1016/j.compbiomed.2021.104418]
Abstract
Accurate automated medical image recognition, including classification and segmentation, is one of the most challenging tasks in medical image analysis. Recently, deep learning methods have achieved remarkable success in medical image classification and segmentation, clearly becoming the state of the art. However, most of these methods are unable to provide uncertainty quantification (UQ) for their output and are often overconfident, which can lead to disastrous consequences. Bayesian Deep Learning (BDL) methods can be used to quantify the uncertainty of traditional deep learning methods, and thus address this issue. We apply three UQ methods to skin cancer image classification: Monte Carlo (MC) dropout, Ensemble MC (EMC) dropout, and Deep Ensemble (DE). To further resolve the uncertainty remaining after applying the MC, EMC, and DE methods, we describe a novel hybrid dynamic BDL model based on Three-Way Decision (TWD) theory that takes uncertainty into account. The proposed dynamic model enables us to use different UQ methods and different deep neural networks in distinct classification phases, so the elements of each phase can be adjusted to the dataset under consideration. In this study, the two best UQ methods (i.e., DE and EMC) are applied in the first two classification phases to analyze two well-known skin cancer datasets, preventing overconfident decisions when diagnosing the disease. The accuracy and F1-score of our final solution are, respectively, 88.95% and 89.00% for the first dataset, and 90.96% and 91.00% for the second dataset. Our results suggest that the proposed TWDBDL model can be used effectively at different stages of medical image analysis.
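The MC dropout component mentioned above has a compact core: keep dropout active at test time, average the class probabilities over T stochastic forward passes, and read the entropy of the mean as an uncertainty score. A minimal numpy sketch under toy assumptions (a single random linear layer stands in for a trained network; this is not the paper's TWDBDL model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": fixed random weights, 5 inputs -> 2 classes.
W = rng.normal(size=(5, 2))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, T=100, p=0.5):
    """T stochastic passes with dropout kept on at inference.

    Returns the predictive mean over classes and the entropy of
    that mean, used as an uncertainty score.
    """
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) > p   # drop each input unit w.p. p
        h = (x * mask) / (1 - p)         # inverted-dropout scaling
        probs.append(softmax(h @ W))
    mean = np.mean(probs, axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, entropy

mean, H = mc_dropout_predict(np.ones(5), T=200)
print(mean.shape)  # (2,)
```

In a three-way decision setting, a high entropy would route the sample to the next phase (or to a human) instead of forcing an accept/reject call.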
Affiliation(s)
- Moloud Abdar
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia.
- Maryam Samami
- Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran
- Sajjad Dehghani Mahmoodabad
- Department of Artificial Intelligence, Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Thang Doan
- Department of Computer Science, McGill University / Mila, Montreal, Canada
- Bogdan Mazoure
- Department of Computer Science, McGill University / Mila, Montreal, Canada
- Li Liu
- Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
- Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- U Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, Singapore University of Social Sciences, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
- Vladimir Makarenkov
- Department of Computer Science, University of Quebec in Montreal, Montreal, Canada
- Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
207
Zanddizari H, Nguyen N, Zeinali B, Chang JM. A new preprocessing approach to improve the performance of CNN-based skin lesion classification. Med Biol Eng Comput 2021; 59:1123-1131. [PMID: 33904008] [DOI: 10.1007/s11517-021-02355-5]
Abstract
Skin lesions are among the severe diseases that, in many cases, endanger patients' lives worldwide. Early detection of disease in dermoscopy images can significantly increase the survival rate. However, accurate detection is highly challenging for several reasons: visual similarity between different classes of disease (e.g., melanoma and non-melanoma lesions), low contrast between lesions and skin, background noise, and artifacts. Machine learning models based on convolutional neural networks (CNN) have been widely used for the automatic recognition of lesion diseases, with high accuracy in comparison to conventional machine learning methods. In this research, we propose a new preprocessing technique to extract the region of interest (RoI) from skin lesion datasets. We compare the performance of state-of-the-art CNN classifiers on two datasets, which contain (1) raw and (2) RoI-extracted images. Our experimental results show that training CNN models on the RoI-extracted dataset can improve prediction accuracy (e.g., a 2.18% improvement for InceptionResNetV2). Moreover, it significantly decreases the evaluation (inference) and training time of the classifiers.
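The RoI idea in this abstract amounts to cropping the classifier's input to the lesion region before training. A hedged numpy sketch, assuming a binary lesion mask is already available (e.g. from a coarse segmentation); `crop_roi` and its margin parameter are illustrative, not the paper's exact procedure:

```python
import numpy as np

def crop_roi(image, mask, margin=2):
    """Crop image to the bounding box of a binary mask, plus a margin.

    Returns the full image unchanged if the mask is empty.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return image
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

img = np.arange(100).reshape(10, 10)
msk = np.zeros((10, 10), dtype=np.uint8)
msk[4:6, 3:7] = 1                 # lesion occupies rows 4-5, cols 3-6
roi = crop_roi(img, msk, margin=1)
print(roi.shape)                  # (4, 6): rows 3-6, cols 2-7
```

Feeding the cropped patch (resized to the network's input size) removes background pixels, which is consistent with the reported accuracy gain and the shorter training and inference times.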
Affiliation(s)
- Hadi Zanddizari
- Department of Electrical Engineering, University of South Florida, Tampa, 33620, USA.
- Nam Nguyen
- Department of Electrical Engineering, University of South Florida, Tampa, 33620, USA
- Behnam Zeinali
- Department of Electrical Engineering, University of South Florida, Tampa, 33620, USA
- J Morris Chang
- Department of Electrical Engineering, University of South Florida, Tampa, 33620, USA
208
Elkhader J, Elemento O. Artificial intelligence in oncology: From bench to clinic. Semin Cancer Biol 2021; 84:113-128. [PMID: 33915289] [DOI: 10.1016/j.semcancer.2021.04.013]
Abstract
In the past few years, Artificial Intelligence (AI) techniques have been applied to almost every facet of oncology, from basic research to drug development and clinical care. In the clinical arena where AI has perhaps received the most attention, AI is showing promise in enhancing and automating image-based diagnostic approaches in fields such as radiology and pathology. Robust AI applications, which retain high performance and reproducibility over multiple datasets, extend from predicting indications for drug development to improving clinical decision support using electronic health record data. In this article, we review some of these advances. We also introduce common concepts and fundamentals of AI and its various uses, along with its caveats, to provide an overview of the opportunities and challenges in the field of oncology. Leveraging AI techniques productively to provide better care throughout a patient's medical journey can fuel the predictive promise of precision medicine.
Affiliation(s)
- Jamal Elkhader
- HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine, Dept. of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10021, USA; Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, NY, 10021, USA; Sandra and Edward Meyer Cancer Center, Weill Cornell Medicine, New York, NY, 10065, USA; Tri-Institutional Training Program in Computational Biology and Medicine, New York, NY, 10065, USA
- Olivier Elemento
- HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine, Dept. of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10021, USA; Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, NY, 10021, USA; Sandra and Edward Meyer Cancer Center, Weill Cornell Medicine, New York, NY, 10065, USA; Tri-Institutional Training Program in Computational Biology and Medicine, New York, NY, 10065, USA.
209
Ding X, Wang S. Efficient Unet with depth-aware gated fusion for automatic skin lesion segmentation. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-202566]
Abstract
Melanoma is a very serious disease, and the segmentation of skin lesions is a critical step in diagnosing it. However, skin lesions possess large size variations, irregular shapes, blurred borders, and complex background information, so their segmentation remains a challenging problem. Though deep learning models usually achieve good performance on skin lesion segmentation, they have a large number of parameters and FLOPs, which limits their application scenarios. These models also do not make good use of low-level feature maps, which are essential for predicting detailed information. The proposed EUnet-DGF uses MBConv to implement a lightweight encoder while maintaining a strong encoding ability. Moreover, the depth-aware gated fusion block we designed can fuse feature maps of different depths and help predict pixels on small patterns. Experiments conducted on the ISIC 2017 and PH2 datasets show the superiority of our model. In particular, EUnet-DGF accounts for only 19% and 6.8% of the original U-Net in terms of the number of parameters and FLOPs, respectively. It possesses great application potential in practical computer-aided diagnosis systems.
Affiliation(s)
- Xiangwen Ding
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
- Shengsheng Wang
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
210
Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors (Basel) 2021; 21:2852. [PMID: 33919583] [PMCID: PMC8074091] [DOI: 10.3390/s21082852]
Abstract
Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, with better accuracy, and can run on lightweight computational devices; the proposed model is also efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action; it helps the patient and dermatologists identify the type of disease from the affected region's image at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
Affiliation(s)
- Parvathaneni Naga Srinivasu
- Department of Computer Science and Engineering, Gitam Institute of Technology, GITAM Deemed to be University, Rushikonda, Visakhapatnam 530045, India
- Muhammad Fazal Ijaz
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Akash Kumar Bhoi
- Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, India
- Wonjoon Kim
- Division of Future Convergence (HCI Science Major), Dongduk Women’s University, Seoul 02748, Korea
- James Jin Kang
- School of Science, Edith Cowan University, Joondalup 6027, Australia
211
Segmentation of Melanocytic Skin Lesions in Dermoscopic and Standard Images Using a Hybrid Two-Stage Approach. Biomed Res Int 2021; 2021:5562801. [PMID: 33880368] [PMCID: PMC8046537] [DOI: 10.1155/2021/5562801]
Abstract
The segmentation of a skin lesion is regarded as very challenging because of the low contrast between the lesion and the surrounding skin, the existence of various artifacts, and different imaging acquisition conditions. The purpose of this study is to segment melanocytic skin lesions in dermoscopic and standard images using a hybrid model combining a new hierarchical K-means and a level set approach, called HK-LS. Although the level set method is usually sensitive to the initial estimate, it is widely used in biomedical image segmentation because it can segment more complex images and does not require a large number of manually labelled images. A preprocessing step makes the proposed model less sensitive to intensity inhomogeneity. The proposed method was evaluated on medical skin images from two publicly available datasets, the PH2 database and the Dermofit database. All skin lesions were segmented with high accuracy (>94%) and Dice coefficients (>0.91) against the ground truth on both databases. The quantitative experimental results reveal that the proposed method yielded significantly better results than other traditional level set models and has a certain advantage over the segmentation results of U-Net on standard images. The proposed method has high clinical applicability for the segmentation of melanocytic skin lesions in dermoscopic and standard images.
212
Liu L, Tsui YY, Mandal M. Skin Lesion Segmentation Using Deep Learning with Auxiliary Task. J Imaging 2021; 7:67. [PMID: 34460517] [PMCID: PMC8321325] [DOI: 10.3390/jimaging7040067]
Abstract
Skin lesion segmentation is a primary step in skin lesion analysis, which can benefit the subsequent classification task. It is challenging since the boundaries of pigment regions may be fuzzy and the entire lesion may share a similar color. Prevalent deep learning methods for skin lesion segmentation make predictions by ensembling different convolutional neural networks (CNN), aggregating multi-scale information, or using a multi-task learning framework. The main purpose of doing so is to make use of as much information as possible in order to make robust predictions. A multi-task learning framework has been shown to be beneficial for the skin lesion segmentation task, usually incorporating the skin lesion classification task; however, multi-task learning requires extra labeling information, which may not be available for skin lesion images. In this paper, a novel CNN architecture using auxiliary information is proposed. Edge prediction, as an auxiliary task, is performed simultaneously with the segmentation task. A cross-connection layer module is proposed, in which the intermediate feature maps of each task are fed into the sub-blocks of the other task; this can implicitly guide the neural network to focus on the boundary region of the segmentation task. In addition, a multi-scale feature aggregation module is proposed, which makes use of features at different scales and enhances the performance of the method. Experimental results show that the proposed method obtains better performance than state-of-the-art methods, with a Jaccard Index (JA) of 79.46, Accuracy (ACC) of 94.32, and Sensitivity (SEN) of 88.76, using only one integrated model that can be learned in an end-to-end manner.
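The JA, ACC, and SEN figures quoted in several of these abstracts are standard overlap metrics on binary masks. As a small generic reference sketch (standard definitions, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Jaccard index (JA), accuracy (ACC), and sensitivity (SEN)
    for a predicted binary mask against a ground-truth mask."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = (pred & gt).sum()     # lesion pixels correctly found
    tn = (~pred & ~gt).sum()   # background correctly rejected
    fp = (pred & ~gt).sum()    # background marked as lesion
    fn = (~pred & gt).sum()    # lesion pixels missed
    ja = tp / (tp + fp + fn)
    acc = (tp + tn) / pred.size
    sen = tp / (tp + fn)
    return ja, acc, sen

p = np.array([[1, 1, 0], [0, 1, 0]])
g = np.array([[1, 0, 0], [0, 1, 1]])
ja, acc, sen = seg_metrics(p, g)
print(round(ja, 2), round(acc, 2), round(sen, 2))  # 0.5 0.67 0.67
```

The Dice coefficient reported by other entries is 2·JA/(1+JA), so the two rank methods identically; SEN alone can be inflated by over-segmentation, which is why JA or Dice is usually the headline number.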
Affiliation(s)
- Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G1H9, Canada; (L.L.); (Y.Y.T.)
213
Wang KS, Yu G, Xu C, Meng XH, Zhou J, Zheng C, Deng Z, Shang L, Liu R, Su S, Zhou X, Li Q, Li J, Wang J, Ma K, Qi J, Hu Z, Tang P, Deng J, Qiu X, Li BY, Shen WD, Quan RP, Yang JT, Huang LY, Xiao Y, Yang ZC, Li Z, Wang SC, Ren H, Liang C, Guo W, Li Y, Xiao H, Gu Y, Yun JP, Huang D, Song Z, Fan X, Chen L, Yan X, Li Z, Huang ZC, Huang J, Luttrell J, Zhang CY, Zhou W, Zhang K, Yi C, Wu C, Shen H, Wang YP, Xiao HM, Deng HW. Accurate diagnosis of colorectal cancer based on histopathology images using artificial intelligence. BMC Med 2021; 19:76. [PMID: 33752648] [PMCID: PMC7986569] [DOI: 10.1186/s12916-021-01942-5]
Abstract
BACKGROUND: Accurate and robust pathological image analysis for colorectal cancer (CRC) diagnosis is time-consuming and knowledge-intensive, but is essential for the treatment of CRC patients. The current heavy workload of pathologists in clinics and hospitals may easily lead to unconscious misdiagnosis of CRC in daily image analyses.
METHODS: Based on a state-of-the-art transfer-learned deep convolutional neural network in artificial intelligence (AI), we proposed a novel patch aggregation strategy for clinical CRC diagnosis using weakly labeled pathological whole-slide image (WSI) patches. This approach was trained and validated on an enormously large dataset of 170,099 patches from >14,680 WSIs of >9631 subjects, covering diverse and representative clinical cases from multiple independent sources across China, the USA, and Germany.
RESULTS: Our AI tool consistently agreed nearly perfectly with most of the experienced expert pathologists (average Kappa statistic 0.896), and often outperformed them, when tested on diagnosing CRC WSIs from multiple centers. The average area under the receiver operating characteristic curve (AUC) of the AI was greater than that of the pathologists (0.988 vs 0.970) and was the best among applications of other AI methods to CRC diagnosis. Our AI-generated heatmaps highlight the image regions of cancer tissue/cells.
CONCLUSIONS: This generalizable AI system can handle large numbers of WSIs consistently and robustly, without the potential bias due to fatigue commonly experienced by clinical pathologists. It will drastically alleviate the heavy clinical burden of daily pathology diagnosis and improve treatment for CRC patients. The tool is generalizable to the diagnosis of other cancers based on image recognition.
Affiliation(s)
- K S Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- G Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Xu
- Department of Biostatistics and Epidemiology, The University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- X H Meng
- Laboratory of Molecular and Statistical Genetics, College of Life Sciences, Hunan Normal University, Changsha, 410081, Hunan, China
- J Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Zheng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Deng
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- L Shang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- R Liu
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- S Su
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- X Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Q Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Li
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- K Ma
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Qi
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Hu
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- P Tang
- Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- X Qiu
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- B Y Li
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- W D Shen
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- R P Quan
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- J T Yang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- L Y Huang
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Y Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Z C Yang
- Department of Pharmacology, Xiangya School of Pharmaceutical Sciences, Central South University, Changsha, 410078, Hunan, China
- Z Li
- School of Life Sciences, Central South University, Changsha, 410013, Hunan, China
- S C Wang
- College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, Hunan, China
- H Ren
- Department of Pathology, Gongli Hospital, Second Military Medical University, Shanghai, 200135, China
- Department of Pathology, the Peace Hospital Affiliated to Changzhi Medical College, Changzhi, 046000, China
- C Liang
- Pathological Laboratory of Adicon Medical Laboratory Co., Ltd, Hangzhou, 310023, Zhejiang, China
- W Guo
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- Y Li
- Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- H Xiao
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- Y Gu
- Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- J P Yun
- Department of Pathology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, China
- D Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Z Song
- Department of Pathology, Chinese PLA General Hospital, Beijing, 100853, China
- X Fan
- Department of Pathology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- L Chen
- Department of Pathology, The First Affiliated Hospital, Air Force Medical University, Xi'an, 710032, China
- X Yan
- Institute of Pathology and Southwest Cancer Center, Southwest Hospital, Third Military Medical University, Chongqing, 400038, China
- Z Li
- Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China
- Z C Huang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Huang
- Department of Anatomy and Neurobiology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Luttrell
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
| | - C Y Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
| | - W Zhou
- College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
| | - K Zhang
- Department of Computer Science, Bioinformatics Facility of Xavier NIH RCMI Cancer Research Center, Xavier University of Louisiana, New Orleans, LA, 70125, USA
| | - C Yi
- Department of Pathology, Ochsner Medical Center, New Orleans, LA, 70121, USA
| | - C Wu
- Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
| | - H Shen
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
| | - Y P Wang
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
| | - H M Xiao
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China.
| | - H W Deng
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA.
- Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China.
- Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA.
| |
Collapse
|
214
|
Tong X, Wei J, Sun B, Su S, Zuo Z, Wu P. ASCU-Net: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation. Diagnostics (Basel) 2021; 11:501. [PMID: 33809048 PMCID: PMC7999819 DOI: 10.3390/diagnostics11030501] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 03/08/2021] [Accepted: 03/09/2021] [Indexed: 01/29/2023] Open
Abstract
Segmentation of skin lesions is a challenging task because of the wide range of skin lesion shapes, sizes, colors, and texture types. In the past few years, deep learning networks such as U-Net have been successfully applied to medical image segmentation and have exhibited faster and more accurate performance. In this paper, we propose an extended version of U-Net for the segmentation of skin lesions using the concept of a triple attention mechanism. First, we selected regions using attention coefficients computed by the attention gate and contextual information. Second, a dual attention decoding module consisting of spatial attention and channel attention was used to capture the spatial correlation between features and improve segmentation performance. The combination of the three attention mechanisms helped the network to focus on a more relevant field of view of the target. The proposed model was evaluated using three datasets, ISIC-2016, ISIC-2017, and PH2. The experimental results demonstrated the effectiveness of our method, with strong robustness to irregular borders, smooth transitions between lesion and skin, noise, and artifacts.
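The channel- and spatial-attention gating that the dual attention decoding module relies on can be illustrated with a minimal NumPy sketch; the pooling-plus-sigmoid form here is an assumed simplification of the general mechanism, not the ASCU-Net implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Global average pooling squeezes the spatial dims,
    # then a sigmoid weight re-scales each channel.
    w = sigmoid(feat.mean(axis=(1, 2)))            # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # Pool across channels into a single map and gate each pixel.
    m = sigmoid(feat.mean(axis=0, keepdims=True))  # (1, H, W)
    return feat * m

feat = np.random.rand(8, 16, 16)
out = spatial_attention(channel_attention(feat))   # apply both gates in sequence
assert out.shape == feat.shape
```

Because both gates produce weights strictly below one, each attended feature map is a soft re-weighting of the input rather than a hard selection.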
Collapse
Affiliation(s)
| | - Junyu Wei
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (X.T.); (B.S.); (S.S.); (Z.Z.); (P.W.)
Collapse
|
215
|
Kleppe A, Skrede OJ, De Raedt S, Liestøl K, Kerr DJ, Danielsen HE. Designing deep learning studies in cancer diagnostics. Nat Rev Cancer 2021; 21:199-211. [PMID: 33514930 DOI: 10.1038/s41568-020-00327-9] [Citation(s) in RCA: 160] [Impact Index Per Article: 40.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 12/09/2020] [Indexed: 12/16/2022]
Abstract
The number of publications on deep learning for cancer diagnostics is rapidly increasing, and systems are frequently claimed to perform comparably with or better than clinicians. However, few systems have yet demonstrated real-world medical utility. In this Perspective, we discuss reasons for the moderate progress and describe remedies designed to facilitate transition to the clinic. Recent, presumably influential, deep learning studies in cancer diagnostics, of which the vast majority used images as input to the system, are evaluated to reveal the status of the field. By manipulating real data, we then exemplify that plentiful and varied training data facilitate the generalizability of neural networks and thus the ability to use them clinically. To reduce the risk of biased performance estimation of deep learning systems, we advocate evaluation in external cohorts and strongly advise that the planned analyses, including a predefined primary analysis, are described in a protocol preferably stored in an online repository. Recommended protocol items should be established for the field, and we present our suggestions.
Collapse
Affiliation(s)
- Andreas Kleppe
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
| | - Ole-Johan Skrede
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
| | - Sepp De Raedt
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
| | - Knut Liestøl
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
| | - David J Kerr
- Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK
| | - Håvard E Danielsen
- Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway.
- Department of Informatics, University of Oslo, Oslo, Norway.
- Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK.
| |
Collapse
|
216
|
Khan MA, Akram T, Zhang YD, Sharif M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit Lett 2021; 143:58-66. [DOI: 10.1016/j.patrec.2020.12.015] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
217
|
Xue C, Zhu L, Fu H, Hu X, Li X, Zhang H, Heng PA. Global guidance network for breast lesion segmentation in ultrasound images. Med Image Anal 2021; 70:101989. [PMID: 33640719 DOI: 10.1016/j.media.2021.101989] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 12/01/2022]
Abstract
Automatic breast lesion segmentation in ultrasound helps to diagnose breast cancer, one of the most dreadful diseases affecting women globally. Segmenting breast regions accurately from ultrasound images is a challenging task due to the inherent speckle artifacts, blurry breast lesion boundaries, and inhomogeneous intensity distributions inside the breast lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN often focus on local regions and have limited capability for capturing long-range dependencies in the input ultrasound image, resulting in degraded breast lesion segmentation accuracy. In this paper, we develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules for boosting breast ultrasound lesion segmentation. The GGB utilizes the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies from both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation result. Experimental results on a public dataset and a collected dataset show that our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Moreover, we also show the application of our network to ultrasound prostate segmentation, in which our method better identifies prostate regions than state-of-the-art networks.
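The long-range dependency modeling performed by the GGB follows the non-local (self-attention) idea, which can be sketched in NumPy as follows; this is a generic non-local block for illustration, not the paper's exact module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_block(feat):
    # feat: (C, H, W). Affinities between every pair of spatial positions
    # let each pixel aggregate features from the whole image, not just a
    # local convolutional neighborhood.
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                 # (C, N) flattened positions
    affinity = softmax(x.T @ x, axis=-1)       # (N, N), each row sums to 1
    y = (x @ affinity.T).reshape(C, H, W)      # affinity-weighted aggregation
    return feat + y                            # residual connection

feat = np.random.rand(4, 5, 5)
out = nonlocal_block(feat)
assert out.shape == feat.shape
```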
Collapse
Affiliation(s)
- Cheng Xue
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
| | - Xiaowei Hu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Hai Zhang
- Shenzhen People's Hospital, The Second Clinical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Guangdong Province, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong. Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| |
Collapse
|
218
|
Zhang J, Xie Y, Wang Y, Xia Y. Inter-Slice Context Residual Learning for 3D Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:661-672. [PMID: 33125324 DOI: 10.1109/tmi.2020.3034995] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Automated and accurate 3D medical image segmentation plays an essential role in assisting medical professionals to evaluate disease progression and make therapeutic schedules quickly. Although deep convolutional neural networks (DCNNs) have been widely applied to this task, the accuracy of these models still needs to be further improved, mainly due to their limited ability to perceive 3D context. In this paper, we propose the 3D context residual network (ConResNet) for the accurate segmentation of 3D medical images. This model consists of an encoder, a segmentation decoder, and a context residual decoder. We design the context residual module and use it to bridge both decoders at each scale. Each context residual module contains both context residual mapping and context attention mapping; the former aims to explicitly learn the inter-slice context information, and the latter uses such context as a kind of attention to boost the segmentation accuracy. We evaluated this model on the MICCAI 2018 Brain Tumor Segmentation (BraTS) dataset and the NIH Pancreas Segmentation (Pancreas-CT) dataset. Our results not only demonstrate the effectiveness of the proposed 3D context residual learning scheme but also indicate that the proposed ConResNet is more accurate than six top-ranking methods in brain tumor segmentation and seven top-ranking methods in pancreas segmentation.
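The inter-slice context residual idea, explicitly modeling how the segmentation changes between adjacent slices along the through-plane axis, can be illustrated with a minimal NumPy sketch; the toy volume below is hypothetical:

```python
import numpy as np

def inter_slice_residual(prob):
    # prob: (D, H, W) per-slice foreground probabilities.
    # The absolute difference between adjacent slices highlights where the
    # segmentation changes along the through-plane (inter-slice) direction.
    return np.abs(prob[1:] - prob[:-1])   # (D-1, H, W)

vol = np.zeros((4, 5, 5))
vol[1:3, 1:4, 1:4] = 1.0                  # a 3x3 object spanning slices 1-2
res = inter_slice_residual(vol)
assert res.shape == (3, 5, 5)
assert res[0].sum() == 9 and res[2].sum() == 9  # object appears / disappears
assert res[1].sum() == 0                        # no change between slices 1 and 2
```

In ConResNet this residual signal is learned by the network rather than computed directly, but the quantity it targets is the same slice-to-slice change.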
Collapse
|
219
|
High-resolution dermoscopy image synthesis with conditional generative adversarial networks. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102224] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
220
|
Jin Q, Cui H, Sun C, Meng Z, Su R. Cascade knowledge diffusion network for skin lesion diagnosis and segmentation. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.106881] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
221
|
Li X, Yu L, Chen H, Fu CW, Xing L, Heng PA. Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:523-534. [PMID: 32479407 DOI: 10.1109/tnnls.2020.2995319] [Citation(s) in RCA: 142] [Impact Index Per Article: 35.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A common shortfall of supervised deep learning for medical imaging is the lack of labeled data, which is often expensive and time-consuming to collect. This article presents a new semisupervised method for medical image segmentation, where the network is optimized by a weighted combination of a common supervised loss for the labeled inputs only and a regularization loss for both the labeled and unlabeled data. To utilize the unlabeled data, our method encourages consistent predictions of the network-in-training for the same input under different perturbations. For the semisupervised segmentation tasks, we introduce a transformation-consistent strategy in the self-ensembling model to enhance the regularization effect for pixel-level predictions. To further improve the regularization effects, we extend the transformation in a more generalized form, including scaling, and optimize the consistency loss with a teacher model, which is an average of the student model weights. We extensively validated the proposed semisupervised method on three typical yet challenging medical image segmentation tasks: 1) skin lesion segmentation from dermoscopy images in the International Skin Imaging Collaboration (ISIC) 2017 data set; 2) optic disk (OD) segmentation from fundus images in the Retinal Fundus Glaucoma Challenge (REFUGE) data set; and 3) liver segmentation from volumetric CT scans in the Liver Tumor Segmentation Challenge (LiTS) data set. Compared with the state of the art, our method shows superior performance on these challenging 2-D/3-D medical images, demonstrating the effectiveness of our semisupervised method for medical image segmentation.
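The transformation-consistent regularization can be stated compactly: the prediction on a transformed input should match the transformed prediction on the original input. A minimal NumPy sketch, with a toy pixel-wise "model" standing in for the real network:

```python
import numpy as np

def consistency_loss(predict, x, transform):
    # Penalize disagreement between "transform then predict" and
    # "predict then transform" on the same unlabeled input.
    return np.mean((predict(transform(x)) - transform(predict(x))) ** 2)

predict = lambda img: img ** 2       # toy pixel-wise "network" (equivariant)
rot = lambda img: np.rot90(img)      # one of the perturbations (rotation)
x = np.random.rand(6, 6)
loss = consistency_loss(predict, x, rot)
assert loss < 1e-12                  # a rotation-equivariant model is consistent
```

A real segmentation network is not exactly equivariant, so this loss is non-zero and its minimization regularizes the pixel-level predictions on unlabeled data.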
Collapse
|
222
|
Adegun A, Viriri S. Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art. Artif Intell Rev 2021; 54:811-841. [DOI: 10.1007/s10462-020-09865-y] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
223
|
Nikesh P, Raju G. Automatic Skin Lesion Segmentation—A Novel Approach of Lesion Filling through Pixel Path. PATTERN RECOGNITION AND IMAGE ANALYSIS 2021. [DOI: 10.1134/s1054661820040215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
224
|
AIM in Oncology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_94-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
225
|
Wu H, Pan J, Li Z, Wen Z, Qin J. Automated Skin Lesion Segmentation Via an Adaptive Dual Attention Module. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:357-370. [PMID: 32986547 DOI: 10.1109/tmi.2020.3027341] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We present a convolutional neural network (CNN) equipped with a novel and efficient adaptive dual attention module (ADAM) for automated skin lesion segmentation from dermoscopic images, which is an essential yet challenging step in the development of a computer-assisted skin disease diagnosis system. The proposed ADAM has three compelling characteristics. First, we integrate two global context modeling mechanisms into the ADAM, one aiming at capturing the boundary continuity of skin lesions by global average pooling, the other dealing with shape irregularity by pixel-wise correlation. In this regard, our network, thanks to the proposed ADAM, is capable of extracting more comprehensive and discriminative features for recognizing the boundaries of skin lesions. Second, the proposed ADAM supports multi-scale resolution fusion and hence can capture multi-scale features to further improve the segmentation accuracy. Third, as we harness a spatial information weighting method in the proposed network, our method can reduce much redundancy compared with traditional CNNs. The proposed network is implemented based on a dual encoder architecture, which is able to enlarge the receptive field without greatly increasing the network parameters. In addition, we assign different dilation rates to different ADAMs so that they can adaptively capture distinguishing features according to the size of a lesion. We extensively evaluate the proposed method on both the ISBI2017 and ISIC2018 datasets, and the experimental results demonstrate that, without using network ensemble schemes, our method is capable of achieving better segmentation performance than state-of-the-art deep learning models, particularly those equipped with attention mechanisms.
Collapse
|
226
|
Lightweight encoder-decoder model for automatic skin lesion segmentation. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100640] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
227
|
Yang J, Li S, Wang Z, Dong H, Wang J, Tang S. Using Deep Learning to Detect Defects in Manufacturing: A Comprehensive Survey and Current Challenges. MATERIALS 2020; 13:ma13245755. [PMID: 33339413 PMCID: PMC7766692 DOI: 10.3390/ma13245755] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 12/05/2020] [Accepted: 12/07/2020] [Indexed: 12/18/2022]
Abstract
The detection of product defects is essential in quality control in manufacturing. This study surveys state-of-the-art deep-learning methods in defect detection. First, we classify the defects of products, such as electronic components, pipes, welded parts, and textile materials, into categories. Second, recent mainstream techniques and deep-learning methods for defect detection are reviewed, with their characteristics, strengths, and shortcomings described. Third, we summarize and analyze the application of ultrasonic testing, filtering, deep learning, machine vision, and other technologies used for defect detection, focusing on methods and experimental results. To further understand the difficulties in the field of defect detection, we investigate the functions and characteristics of existing equipment used for defect detection. The core ideas and codes of studies related to high precision, high positioning, rapid detection, small objects, complex backgrounds, occluded object detection and object association are summarized. Lastly, we outline the current achievements and limitations of the existing methods, along with the current research challenges, to assist the research community on defect detection in setting a further agenda for future studies.
Collapse
Affiliation(s)
- Jing Yang
- School of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (J.Y.); (Z.W.); (H.D.); (J.W.)
- Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
| | - Shaobo Li
- School of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (J.Y.); (Z.W.); (H.D.); (J.W.)
- Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
- Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University, Guiyang 550025, China;
- Correspondence:
| | - Zheng Wang
- School of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (J.Y.); (Z.W.); (H.D.); (J.W.)
| | - Hao Dong
- School of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (J.Y.); (Z.W.); (H.D.); (J.W.)
| | - Jun Wang
- School of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (J.Y.); (Z.W.); (H.D.); (J.W.)
| | - Shihao Tang
- Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University, Guiyang 550025, China;
| |
Collapse
|
228
|
Wu J, Hu W, Wen Y, Tu W, Liu X. Skin Lesion Classification Using Densely Connected Convolutional Networks with Attention Residual Learning. SENSORS (BASEL, SWITZERLAND) 2020; 20:E7080. [PMID: 33321864 PMCID: PMC7764313 DOI: 10.3390/s20247080] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2020] [Revised: 12/02/2020] [Accepted: 12/09/2020] [Indexed: 11/16/2022]
Abstract
Skin lesion classification is an effective approach aided by computer vision for the diagnosis of skin cancer. Though deep learning models presented advantages over traditional methods and brought tremendous breakthroughs, a precise diagnosis is still challenging because of the intra-class variation and inter-class similarity caused by the diversity of imaging methods and clinicopathology. In this paper, we propose a densely connected convolutional network with an attention and residual learning (ARDT-DenseNet) method for skin lesion classification. Each ARDT block consists of dense blocks, transition blocks and attention and residual modules. Compared to a residual network with the same number of convolutional layers, the size of the parameters of the densely connected network proposed in this paper has been reduced by half, while the accuracy of skin lesion classification is preserved. Our improved densely connected network adds an attention mechanism and residual learning after each dense block and transition block without introducing additional parameters. We evaluate the ARDT-DenseNet model with the ISIC 2016 and ISIC 2017 datasets. Our method achieves an ACC of 85.7% and an AUC of 83.7% in skin lesion classification with ISIC 2016 and an average AUC of 91.8% in skin lesion classification with ISIC 2017. The experimental results show that the method proposed in this paper has achieved a significant improvement in skin lesion classification, which is superior to that of the state-of-the-art method.
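The dense connectivity underlying the ARDT-DenseNet blocks, in which each layer receives the concatenation of all earlier feature maps, can be sketched in NumPy; this is a generic illustration of DenseNet-style wiring with a toy one-channel "layer", not the authors' model:

```python
import numpy as np

def dense_block(x, layers):
    # DenseNet-style connectivity: every layer receives the channel-wise
    # concatenation of the input and all previously produced feature maps.
    feats = [x]
    for layer in layers:
        feats.append(layer(np.concatenate(feats, axis=0)))
    return np.concatenate(feats, axis=0)

# toy "layer" producing one new channel from whatever it receives
grow = lambda f: np.maximum(f.mean(axis=0, keepdims=True), 0)

x = np.random.rand(3, 8, 8)
out = dense_block(x, [grow, grow, grow])
assert out.shape == (6, 8, 8)   # 3 input channels + 3 grown channels
```

Because each layer adds only a small number of new channels while reusing all earlier ones, parameter counts stay low relative to a plain residual network of the same depth, which is the trade-off the abstract highlights.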
Collapse
Affiliation(s)
- Jing Wu
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China; (W.H.); (W.T.); (X.L.)
| | - Wei Hu
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China; (W.H.); (W.T.); (X.L.)
| | - Yuan Wen
- School of Computer Science and Statistics, Trinity College Dublin, Dublin 2, Ireland
| | - Wenli Tu
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China; (W.H.); (W.T.); (X.L.)
| | - Xiaoming Liu
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China; (W.H.); (W.T.); (X.L.)
| |
Collapse
|
229
|
Mahbod A, Tschandl P, Langs G, Ecker R, Ellinger I. The effects of skin lesion segmentation on the performance of dermatoscopic image classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105725. [PMID: 32882594 DOI: 10.1016/j.cmpb.2020.105725] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 08/21/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Malignant melanoma (MM) is one of the deadliest types of skin cancer. Analysing dermatoscopic images plays an important role in the early detection of MM and other pigmented skin lesions. Among different computer-based methods, deep learning-based approaches and in particular convolutional neural networks have shown excellent classification and segmentation performances for dermatoscopic skin lesion images. These models can be trained end-to-end without requiring any hand-crafted features. However, the effect of using lesion segmentation information on classification performance has remained an open question. METHODS In this study, we explicitly investigated the impact of using skin lesion segmentation masks on the performance of dermatoscopic image classification. To do this, first, we developed a baseline classifier as the reference model without using any segmentation masks. Then, we used either manually or automatically created segmentation masks in both training and test phases in different scenarios and investigated the classification performances. The different scenarios included approaches that exploited the segmentation masks either for cropping of skin lesion images or removing the surrounding background or using the segmentation masks as an additional input channel for model training. RESULTS Evaluated on the ISIC 2017 challenge dataset which contained two binary classification tasks (i.e. MM vs. all and seborrheic keratosis (SK) vs. all) and based on the derived area under the receiver operating characteristic curve scores, we observed four main outcomes. 
Our results show that 1) using segmentation masks did not significantly improve the MM classification performance in any scenario, 2) in one of the scenarios (using segmentation masks for dilated cropping), SK classification performance was significantly improved, 3) removing all background information with the segmentation masks significantly degraded the overall classification performance, and 4) when using the appropriate scenario (segmentation for dilated cropping), there was no significant difference between using manually and automatically created segmentation masks. CONCLUSIONS We systematically explored the effects of using image segmentation on the performance of dermatoscopic skin lesion classification.
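The mask-usage scenarios compared above (dilated cropping, background removal, and mask-as-extra-channel) can be sketched in NumPy; the margin value and helper names below are illustrative, not taken from the paper:

```python
import numpy as np

def dilated_crop(img, mask, margin=2):
    # Crop to the lesion bounding box enlarged by a margin ("dilated cropping").
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, img.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, img.shape[1])
    return img[y0:y1, x0:x1]

def remove_background(img, mask):
    return img * mask                      # zero out every non-lesion pixel

def mask_as_channel(img, mask):
    return np.stack([img, mask], axis=-1)  # mask fed as an extra input channel

img = np.random.rand(10, 10)
mask = np.zeros((10, 10))
mask[4:7, 4:7] = 1.0                       # 3x3 "lesion"
assert dilated_crop(img, mask).shape == (7, 7)
assert mask_as_channel(img, mask).shape == (10, 10, 2)
```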
Collapse
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria.
| | - Philipp Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
| | - Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Rupert Ecker
- Research and Development Department of TissueGnostics GmbH, Vienna, Austria
| | - Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
| |
Collapse
|
230
|
Manzo M, Pellino S. Bucket of Deep Transfer Learning Features and Classification Models for Melanoma Detection. J Imaging 2020; 6:129. [PMID: 34460526 PMCID: PMC8321205 DOI: 10.3390/jimaging6120129] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Revised: 11/18/2020] [Accepted: 11/23/2020] [Indexed: 02/04/2023] Open
Abstract
Malignant melanoma is the deadliest form of skin cancer and, in recent years, its worldwide incidence rate has been growing rapidly. The most effective approach to targeted treatment is early diagnosis. Deep learning algorithms, specifically convolutional neural networks, represent a methodology for image analysis and representation. They optimize the feature design task, which is essential for an automatic approach to different types of images, including medical ones. In this paper, we adopted pretrained deep convolutional neural network architectures for image representation with the purpose of predicting melanoma in skin lesions. First, we applied a transfer learning approach to extract image features. Second, we used the transferred features inside an ensemble classification context. Specifically, the framework trains individual classifiers on balanced subspaces and combines the provided predictions through statistical measures. Experiments on skin lesion image datasets show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
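The ensemble step described above, individual classifiers combined through a statistical measure, can be sketched as a majority vote in NumPy; the thresholding "classifiers" here stand in for real models trained on deep transfer features and are purely hypothetical:

```python
import numpy as np

def ensemble_predict(classifiers, feats):
    # Collect one binary vote per base classifier and take the majority.
    votes = np.stack([clf(feats) for clf in classifiers])  # (n_clf, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# toy base classifiers: each thresholds the first feature dimension
clfs = [lambda f, t=t: (f[:, 0] > t).astype(int) for t in (0.3, 0.5, 0.7)]
feats = np.array([[0.9], [0.4], [0.1]])  # stand-in for deep transfer features
pred = ensemble_predict(clfs, feats)
assert list(pred) == [1, 0, 0]
```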
Collapse
Affiliation(s)
- Mario Manzo
- Information Technology Services, University of Naples “L’Orientale”, 80121 Naples, Italy
- Simone Pellino
- Department of Applied Science, I.S. Mattei Aversa M.I.U.R., 81031 Rome, Italy
231
Lee J, Nishikawa RM. Cross-organ, cross-modality transfer learning: feasibility study for segmentation and classification. IEEE Access 2020; 8:210194-210205. [PMID: 33680628 PMCID: PMC7935042 DOI: 10.1109/access.2020.3038909] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 06/12/2023]
Abstract
We conducted two analyses comparing the transferability of a traditionally transfer-learned CNN (TL) to that of a CNN first fine-tuned with an unrelated set of medical images (mammograms in this study) and then fine-tuned a second time as in TL, which we call the cross-organ, cross-modality transfer-learned (XTL) network, on 1) multiple sclerosis (MS) segmentation of brain magnetic resonance (MR) images and 2) tumor malignancy classification of multi-parametric prostate MR images. We used 2133 screening mammograms and two public challenge datasets (longitudinal MS lesion segmentation and ProstateX) as the intermediate and target datasets for XTL, respectively. We used two CNN architectures as basis networks for each analysis and fine-tuned them to match the target image types (volumetric) and tasks (segmentation and classification). We evaluated the XTL networks against the traditional TL networks using the Dice coefficient and AUC as figures of merit for the two analyses, respectively. For the segmentation test, XTL networks outperformed TL networks in terms of Dice coefficient (0.72 vs. 0.70-0.71; p < 0.0001 for the differences). For the classification test, XTL networks (AUC = 0.77-0.80) outperformed TL networks (AUC = 0.73-0.75). The difference in the AUCs (AUCdiff = 0.045-0.047) was statistically significant (p < 0.03). We showed that XTL using mammograms improves network performance compared to traditional TL, despite the differences in image characteristics (x-ray vs. MRI and 2D vs. 3D) and imaging tasks (classification vs. segmentation for one of the tasks).
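The Dice coefficient used above as the segmentation figure of merit is straightforward to compute on binary masks; a minimal sketch on small synthetic masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Two overlapping synthetic lesion masks: 16 pixels each, 9 shared.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(round(dice(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

The small `eps` keeps the ratio defined when both masks are empty; identical masks score 1, disjoint masks score 0.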
Affiliation(s)
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213 USA
232
Vasconcelos CN, Vasconcelos BN. Experiments using deep learning for dermoscopy image analysis. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2017.11.005] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Indexed: 12/01/2022]
233
Guo L, Xie G, Xu X, Ren J. Effective Melanoma Recognition Using Deep Convolutional Neural Network with Covariance Discriminant Loss. Sensors (Basel) 2020; 20:E5786. [PMID: 33066123 PMCID: PMC7601957 DOI: 10.3390/s20205786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/20/2020] [Revised: 09/27/2020] [Accepted: 10/09/2020] [Indexed: 11/21/2022]
Abstract
Melanoma recognition is challenging due to data imbalance, high intra-class variation and large inter-class similarity. To address these issues, we propose a melanoma recognition method using a deep convolutional neural network with a covariance discriminant loss on dermoscopy images. The deep convolutional neural network is trained under the joint supervision of a cross-entropy loss and the covariance discriminant loss, rectifying the model outputs and the extracted features simultaneously. Specifically, we design an embedding loss, namely the covariance discriminant loss, which takes the first and second distances into account simultaneously to provide more constraints. By constraining the distance between hard samples and the minority-class center, the deep features of melanoma and non-melanoma can be separated effectively. We also design a corresponding algorithm to mine the hard samples, and we analyze the relationship between the proposed loss and other losses. On the International Symposium on Biomedical Imaging (ISBI) 2018 Skin Lesion Analysis dataset, the two schemes in the proposed method yield sensitivities of 0.942 and 0.917, respectively. The comprehensive results demonstrate the efficacy of the designed embedding loss and the proposed methodology.
Affiliation(s)
- Lei Guo
- College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
- Gang Xie
- College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
- Shanxi Key Laboratory of Advanced Control and Intelligent Information System, School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China
- Xinying Xu
- College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
- Jinchang Ren
- College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China
- Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
234
Thurnhofer-Hemsi K, Domínguez E. A Convolutional Neural Network Framework for Accurate Skin Cancer Detection. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10364-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Indexed: 02/07/2023]
235
He X, Su J, Wang G, Zhang K, Alexander N, Hsu C, Li F, Chen M, Huang K, Yu N, Huang W, Bu W, Wang Y, Zhao S, Chen X. AI-Provided Instant Differential Diagnosis of Pemphigus Vulgaris and Bullous Pemphigoid (Preprint). JMIR Med Inform 2020. [DOI: 10.2196/24845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/13/2022]
236
Hosny KM, Kassem MA, Fouad MM. Classification of Skin Lesions into Seven Classes Using Transfer Learning with AlexNet. J Digit Imaging 2020; 33:1325-1334. [PMID: 32607904 PMCID: PMC7573031 DOI: 10.1007/s10278-020-00371-9] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Indexed: 12/23/2022]
Abstract
Melanoma is a deadly skin cancer. The high similarity between different kinds of skin lesions leads to incorrect classification, while accurate classification of a skin lesion in its early stages saves lives. In this paper, a highly accurate method is proposed for the skin lesion classification process. The proposed method utilizes transfer learning with the pre-trained AlexNet: the parameters of the original model are used as initial values, while the weights of the last three replaced layers are randomly initialized. The proposed method was tested using the most recent public dataset, ISIC 2018. Based on the obtained results, the proposed method accurately classifies skin lesions into seven classes: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion. The achieved percentages are 98.70%, 95.60%, 99.27%, and 95.06% for accuracy, sensitivity, specificity, and precision, respectively.
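The four reported percentages are standard confusion-matrix metrics; a small sketch showing how they are computed in a one-vs-rest view (the counts below are hypothetical, not from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity and precision
    from one-vs-rest confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision   = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision

# Hypothetical counts for one lesion class evaluated one-vs-rest.
acc, sens, spec, prec = classification_metrics(tp=90, fp=5, tn=900, fn=10)
print(f"{acc:.3f} {sens:.3f} {spec:.3f} {prec:.3f}")  # 0.985 0.900 0.994 0.947
```

In a seven-class setting each metric is computed per class this way and then averaged.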
Affiliation(s)
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
- Mohamed A. Kassem
- Department of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, KafrElSheikh University, KafrElSheikh 33511, Egypt
- Mohamed M. Fouad
- Department of Electronics and Communication, Faculty of Engineering, Zagazig University, Zagazig 44519, Egypt
237
Jayalakshmi D, Dheeba J. Border Detection in Skin Lesion Images Using an Improved Clustering Algorithm. International Journal of e-Collaboration 2020. [DOI: 10.4018/ijec.2020100102] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 11/09/2022]
Abstract
The incidence of skin cancer has been increasing in recent years, and it can become dangerous if not detected early. Computer-aided diagnosis systems can assist dermatologists with skin cancer detection by examining the features more critically. In this article, a detailed review of pre-processing and segmentation methods for skin lesion images is conducted by investigating existing and prevalent segmentation methods for the diagnosis of skin cancer. The pre-processing stage is divided into two phases: in the first phase, a median filter is used to remove artifacts, and in the second phase, an improved K-means clustering with outlier removal (KMOR) algorithm is suggested. The proposed method was tested on the publicly available Danderm database. The improved cluster-based algorithm gives an accuracy of 92.8% with a sensitivity of 93% and a specificity of 90%, with an AUC of 0.90435. From the experimental results, it is evident that the clustering algorithm has performed well in detecting the border of the lesion and is suitable for pre-processing dermoscopic images.
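The K-means-with-outlier-removal (KMOR) idea can be sketched as ordinary K-means iterations in which points far from every centroid are diverted to an outlier label; this is a simplified illustration, not the authors' exact algorithm (the `gamma`-times-mean-distance threshold rule is an assumption of this sketch):

```python
import numpy as np

def kmeans_outlier_removal(X, k=2, gamma=3.0, iters=20):
    """K-means in which points whose nearest-centroid distance exceeds
    gamma times the mean nearest distance are labelled outliers (-1)."""
    centers = X[:k].astype(float).copy()           # deterministic init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # (n, k)
        nearest = d.min(axis=1)
        labels = d.argmin(axis=1)
        labels[nearest > gamma * nearest.mean()] = -1           # outliers
        for j in range(k):                         # update non-empty clusters
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two tight intensity clusters plus one far-away outlier value.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2], [50.0]])
labels, centers = kmeans_outlier_removal(X, k=2)
```

Excluding the flagged outliers from the centroid updates keeps artifact-like extreme values (e.g. specular highlights) from dragging the cluster centers away from lesion and skin intensities.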
Affiliation(s)
- Dheeba J.
- Vellore Institute of Technology, India
238
Khan MA, Akram T, Sharif M, Javed K, Rashid M, Bukhari SAC. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Comput Appl 2020; 32:15929-15948. [DOI: 10.1007/s00521-019-04514-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Received: 12/17/2018] [Accepted: 10/09/2019] [Indexed: 12/22/2022]
239
Tang P, Liang Q, Yan X, Xiang S, Zhang D. GP-CNN-DTEL: Global-Part CNN Model With Data-Transformed Ensemble Learning for Skin Lesion Classification. IEEE J Biomed Health Inform 2020; 24:2870-2882. [DOI: 10.1109/jbhi.2020.2977013] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Indexed: 11/09/2022]
240
Burlina PM, Joshi NJ, Mathew PA, Paul W, Rebman AW, Aucott JN. AI-based detection of erythema migrans and disambiguation against other skin lesions. Comput Biol Med 2020; 125:103977. [DOI: 10.1016/j.compbiomed.2020.103977] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Received: 03/24/2020] [Revised: 08/14/2020] [Accepted: 08/15/2020] [Indexed: 12/28/2022]
241
Birkenfeld JS, Tucker-Schwartz JM, Soenksen LR, Avilés-Izquierdo JA, Marti-Fuster B. Computer-aided classification of suspicious pigmented lesions using wide-field images. Comput Methods Programs Biomed 2020; 195:105631. [PMID: 32652382 DOI: 10.1016/j.cmpb.2020.105631] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Received: 03/20/2020] [Accepted: 06/21/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Early identification of melanoma is conducted through whole-body visual examinations to detect suspicious pigmented lesions, a process whose accuracy fluctuates depending on the experience and available time of the examiner. Computer-aided diagnosis tools for skin lesions are typically trained using pre-selected single-lesion images taken under controlled conditions, which limits their use in wide-field scenes. Here, we propose a computer-aided classifier system with such input conditions to aid in the rapid identification of suspicious pigmented lesions at the primary care level. METHODS 133 patients with a multitude of skin lesions were recruited for this study. All lesions were examined by a board-certified dermatologist and classified into "suspicious" and "non-suspicious". A new clinical database was acquired by taking wide-field images of all major body parts with a consumer-grade camera under natural illumination conditions and with a consistent source of image variability. 3-8 images were acquired per patient on different sites of the body, and a total of 1759 pigmented lesions were extracted. A machine learning classifier was optimized and built into a computer-aided classification system to binary-classify each lesion using a suspiciousness score. RESULTS In a testing set, our computer-aided classification system achieved a sensitivity of 100% for suspicious pigmented lesions that were later confirmed by dermoscopy examination ("SPL_A") and 83.2% for suspicious pigmented lesions that were not confirmed after examination ("SPL_B"). Sensitivity for non-suspicious lesions was 72.1%, and accuracy was 75.9%. With these results, we defined a suspiciousness score that is aligned with common macro-screening (naked-eye) practices.
CONCLUSIONS This work demonstrates that wide-field photography combined with computer-aided classification systems can distinguish suspicious from non-suspicious pigmented lesions, and might be effective for assessing the severity of suspicious pigmented lesions. We believe this approach could be useful to support skin screenings at a population level.
Affiliation(s)
- Judith S Birkenfeld
- Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; MIT linQ, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Brigham and Women's Hospital - Harvard Medical School, 75 Francis St, Boston, MA 02115, USA; Massachusetts General Hospital - Harvard Medical School, 55 Fruit St, Boston, MA 02114, USA
- Jason M Tucker-Schwartz
- MIT linQ, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
- Luis R Soenksen
- Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; MIT linQ, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, 3 Blackfan Cir, Boston, MA 02115, USA; Harvard-MIT Program in Health Sciences and Technology, Cambridge, MA 02139, USA
- José A Avilés-Izquierdo
- Department of Dermatology, Hospital General Universitario Gregorio Marañón, Calle del Dr. Esquerdo 46, 28007 Madrid, Spain
- Berta Marti-Fuster
- Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; MIT linQ, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Brigham and Women's Hospital - Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
242
Kuang Z, Deng X, Yu L, Wang H, Li T, Wang S. Ψ-Net: Focusing on the border areas of intracerebral hemorrhage on CT images. Comput Methods Programs Biomed 2020; 194:105546. [PMID: 32474252 DOI: 10.1016/j.cmpb.2020.105546] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Received: 12/02/2019] [Revised: 05/11/2020] [Accepted: 05/12/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE The volume of an intracerebral hemorrhage (ICH) obtained from CT scans is essential for quantification and treatment planning. However, fast and accurate volume acquisition brings great challenges. On the one hand, manual segmentation, the gold standard for volume estimation, is both time consuming and operator dependent. On the other hand, low contrast with normal tissues and the irregular shapes and distributions of the hemorrhage make it hard for existing automatic segmentation methods to achieve satisfactory performance. METHOD To solve the above problems, a CNN-based architecture is proposed in this work, consisting of a novel model named Ψ-Net and a multi-level training strategy. In the structure of Ψ-Net, a self-attention block and a contextual-attention block are designed to suppress irrelevant information and segment border areas of the hemorrhage more finely. Further, a multi-level training strategy is put forward to facilitate the training process. By adding slice-level learning and a weighted loss, the multi-level training strategy effectively alleviates the problems of vanishing gradients and class imbalance. The proposed training strategy can be applied to most segmentation networks, especially for complex models and on small datasets. RESULTS The proposed architecture is evaluated on a spontaneous ICH dataset and a traumatic ICH dataset. Compared to previous works on ICH segmentation, the proposed architecture obtains state-of-the-art performance (Dice of 0.950) on the spontaneous ICH, and comparable results (Dice of 0.895) with the best method on the traumatic ICH. On the other hand, the time consumption of the proposed architecture is much less than that of previous methods for both training and inference. Moreover, experimental results on various models prove the universality of the multi-level training strategy.
CONCLUSIONS This study proposed a novel CNN-based architecture, Ψ-Net, with a multi-level training strategy. It takes less time to train and achieves superior performance compared with previous ICH segmentation methods.
Affiliation(s)
- Zhuo Kuang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Xianbo Deng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Li Yu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Hongkui Wang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Tiansong Li
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Shengwei Wang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
243
Attia M, Hossny M, Zhou H, Nahavandi S, Asadi H, Yazdabadi A. Realistic hair simulator for skin lesion images: A novel benchemarking tool. Artif Intell Med 2020; 108:101933. [PMID: 32972662 DOI: 10.1016/j.artmed.2020.101933] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 12/26/2019] [Revised: 06/05/2020] [Accepted: 07/13/2020] [Indexed: 11/15/2022]
Abstract
Automated skin lesion analysis is a trending field that has gained attention among dermatologists and health care practitioners. Skin lesion restoration is an essential pre-processing step for lesion enhancement and for accurate automated analysis and diagnosis by both dermatologists and computer-aided diagnosis tools. Hair occlusion is one of the most common artifacts in dermatoscopic images, and it can negatively impact skin lesion diagnosis by both dermatologists and automated computer diagnostic tools. Digital hair removal is a non-invasive image enhancement method that decreases the hair-occlusion artifact in previously captured images. Several hair removal methods have been proposed for hair delineation and removal without standardized benchmarking techniques, and manual annotation is one of the main challenges that hinder the validation of these methods on large numbers of images or against benchmarking datasets for comparison purposes. In the presented work, we propose a photo-realistic hair simulator based on context-aware image synthesis using image-to-image translation via conditional generative adversarial networks, which generates different hair occlusions in skin images along with a ground-truth mask of hair locations. The hair-occluded image is synthesized from the latent structure of any input hair-free image by deep-encoding the input image into a latent vector of features; the locations of the required hair are highlighted using white pixels on the input image. These deep-encoded features are then used to reconstruct the synthetic, highly realistic hair-occluded image. In addition, we explored three loss functions, including the L1-norm, L2-norm and structural similarity index (SSIM), to maximize the visual quality of the image synthesis. For the evaluation of the generated samples, t-SNE feature mapping and the Bland-Altman test are used as visualization tools for the experimental results.
The results show the superior performance of our proposed method compared to previous methods for hair synthesis, with plausible colours and preservation of the integrity of the lesion texture. The proposed method can be used to generate benchmarking datasets for comparing the performance of digital hair removal methods. The code is available online at: https://github.com/attiamohammed/realhair.
Affiliation(s)
- Mohamed Attia
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia; Medical Research Institute, Alexandria University, Egypt
- Mohammed Hossny
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Hailing Zhou
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Hamed Asadi
- School of Medicine, Melbourne University, Australia
244
Lei B, Huang S, Li H, Li R, Bian C, Chou YH, Qin J, Zhou P, Gong X, Cheng JZ. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med Image Anal 2020; 64:101753. [DOI: 10.1016/j.media.2020.101753] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Received: 03/05/2020] [Revised: 05/27/2020] [Accepted: 06/06/2020] [Indexed: 11/25/2022]
245
Yang J, Wu X, Liang J, Sun X, Cheng MM, Rosin PL, Wang L. Self-Paced Balance Learning for Clinical Skin Disease Recognition. IEEE Trans Neural Netw Learn Syst 2020; 31:2832-2846. [PMID: 31199274 DOI: 10.1109/tnnls.2019.2917524] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Indexed: 06/09/2023]
Abstract
Class imbalance is a challenging problem in many classification tasks. It induces biased classification results for minority classes that contain fewer training samples than others. Most existing approaches aim to remedy the imbalanced number of instances among categories by resampling the majority and minority classes accordingly. However, the imbalanced level of difficulty of recognizing different categories is also crucial, especially for distinguishing samples among many classes. For example, in the task of clinical skin disease recognition, several rare diseases have a small number of training samples but are easy to diagnose because of their distinct visual properties. On the other hand, some common skin diseases, e.g., eczema, are hard to recognize due to the lack of special symptoms. To address this problem, we propose a self-paced balance learning (SPBL) algorithm in this paper. Specifically, we introduce a comprehensive metric termed the complexity of image category, which is a combination of both sample number and recognition difficulty. First, the complexity is initialized using the model of the first pace, where a pace indicates one iteration in the self-paced learning paradigm. We then assign each class a penalty weight that is larger for more complex categories and smaller for easier ones, after which the curriculum is reconstructed by rearranging the training samples. Consequently, the model can iteratively learn discriminative representations via balancing the complexity in each pace. Experimental results on the SD-198 and SD-260 benchmark data sets demonstrate that the proposed SPBL algorithm performs favorably against state-of-the-art methods. We also demonstrate the effectiveness of the SPBL algorithm's generalization capacity on various tasks, such as indoor scene image recognition and object classification.
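The class-complexity weighting can be sketched by combining per-class sample scarcity with per-class recognition difficulty into a penalty weight; the combination formula below is purely illustrative, not the authors' exact definition of the complexity metric:

```python
import numpy as np

def class_complexity_weights(counts, error_rates, alpha=0.5):
    """Blend scarcity (few samples) and difficulty (high error rate) into a
    per-class penalty weight, normalised to mean 1."""
    counts = np.asarray(counts, dtype=float)
    errors = np.asarray(error_rates, dtype=float)
    scarcity = counts.max() / counts      # larger for rarer classes
    scarcity /= scarcity.max()            # scale into (0, 1]
    complexity = alpha * scarcity + (1.0 - alpha) * errors
    return complexity / complexity.mean()

# Three classes: common-but-hard (eczema-like), common-and-easy, rare-but-easy.
w = class_complexity_weights(counts=[1000, 1000, 50],
                             error_rates=[0.4, 0.1, 0.1])
```

Unlike pure resampling, the common-but-hard class still receives a larger weight than the common-and-easy one, which is the point the abstract makes about difficulty mattering alongside sample count.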
246
Skin lesion segmentation via generative adversarial networks with dual discriminators. Med Image Anal 2020; 64:101716. [DOI: 10.1016/j.media.2020.101716] [Citation(s) in RCA: 85] [Impact Index Per Article: 17.0] [Received: 07/06/2019] [Revised: 03/26/2020] [Accepted: 04/24/2020] [Indexed: 11/21/2022]
247
Shan P, Wang Y, Fu C, Song W, Chen J. Automatic skin lesion segmentation based on FC-DPN. Comput Biol Med 2020; 123:103762. [PMID: 32768035 DOI: 10.1016/j.compbiomed.2020.103762] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Received: 02/27/2020] [Revised: 03/31/2020] [Accepted: 04/10/2020] [Indexed: 10/23/2022]
Abstract
Automatic skin lesion segmentation in dermoscopy images is challenging due to the diversity of skin lesion characteristics, the low contrast between normal skin and lesions, and the many artefacts present in the images. To meet these challenges, we propose a novel segmentation topology called FC-DPN, which is built upon a fully convolutional network (FCN) and a dual path network (DPN). The DPN inherits the advantages of residual and densely connected paths, enabling effective feature re-usage and re-exploitation. We replace the dense blocks in fully convolutional DenseNets (FC-DenseNets) with two kinds of sub-DPN blocks, namely sub-DPN projection blocks and sub-DPN processing blocks. This framework enables FC-DPN to acquire more representative and discriminative features for more accurate segmentation. Many images in the original ISBI 2017 Skin Lesion Challenge test dataset have incorrect or inaccurate ground truths, and these ground truths have been revised; the revised test dataset is called the modified ISBI 2017 Skin Lesion Challenge test dataset. The proposed method achieves an average Dice coefficient of 88.13% and a Jaccard index of 80.02% on the modified ISBI 2017 Skin Lesion Challenge test dataset, and 90.26% and 83.51%, respectively, on the PH2 dataset. Extensive experimental results on the two datasets demonstrate that the proposed method performs better than FC-DenseNets and other well-established segmentation algorithms.
Affiliation(s)
- Pufang Shan
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Yiding Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Wei Song
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Junxin Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, China
248
Zunair H, Ben Hamza A. Melanoma detection using adversarial training and deep transfer learning. Phys Med Biol 2020; 65:135005. [PMID: 32252036 DOI: 10.1088/1361-6560/ab86d3] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Indexed: 11/12/2022]
Abstract
Skin lesion datasets consist predominantly of normal samples with only a small percentage of abnormal ones, giving rise to the class imbalance problem. Also, skin lesion images are largely similar in overall appearance owing to the low inter-class variability. In this paper, we propose a two-stage framework for automatic classification of skin lesion images using adversarial training and transfer learning toward melanoma detection. In the first stage, we leverage the inter-class variation of the data distribution for the task of conditional image synthesis by learning the inter-class mapping and synthesizing under-represented class samples from the over-represented ones using unpaired image-to-image translation. In the second stage, we train a deep convolutional neural network for skin lesion classification using the original training set combined with the newly synthesized under-represented class samples. The training of this classifier is carried out by minimizing the focal loss function, which assists the model in learning from hard examples while down-weighting the easy ones. Experiments conducted on a dermatology image benchmark demonstrate the superiority of our proposed approach over several standard baseline methods, achieving significant performance improvements. Interestingly, we show through feature visualization and analysis that our method leads to context-based lesion assessment that can reach the level of an expert dermatologist.
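The focal loss minimized in the second stage down-weights easy examples relative to plain cross entropy; a minimal NumPy sketch of the standard binary form FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: cross entropy modulated by (1 - p_t)^gamma,
    with alpha balancing the positive and negative classes."""
    p = np.clip(p, eps, 1 - eps)                 # numerical stability
    p_t = np.where(y == 1, p, 1 - p)             # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

# A well-classified example (p_t = 0.9) contributes far less loss
# than a hard, misclassified one (p_t = 0.1).
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

With gamma = 0 this reduces to alpha-weighted cross entropy; raising gamma shrinks the contribution of easy examples so training focuses on the hard ones.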
249
Khanna A, Londhe ND, Gupta S, Semwal A. A deep Residual U-Net convolutional neural network for automated lung segmentation in computed tomography images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.07.007] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Indexed: 11/24/2022]
250
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615 PMCID: PMC8580417 DOI: 10.1109/jbhi.2020.2991043] [Citation(s) in RCA: 160] [Impact Index Per Article: 32.0] [Indexed: 11/09/2022]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.