51
Tomihama RT, Camara JR, Kiang SC. Machine learning analysis of confounding variables of a convolutional neural network specific for abdominal aortic aneurysms. JVS Vasc Sci 2023. [DOI: 10.1016/j.jvssci.2022.11.004]
52
Dey S, Mitra S, Chakraborty S, Mondal D, Nasipuri M, Das N. GC-EnC: A Copula based ensemble of CNNs for malignancy identification in breast histopathology and cytology images. Comput Biol Med 2023; 152:106329. [PMID: 36473342] [DOI: 10.1016/j.compbiomed.2022.106329]
Abstract
In the present work, we explore the potential of a Copula-based ensemble of CNNs (convolutional neural networks) over individual classifiers for malignancy identification in histopathology and cytology images. The Copula-based model integrates the three best-performing CNN architectures, namely DenseNet-161/201, ResNet-101/34, and InceptionNet-V3. The limitation of small datasets is circumvented using a fuzzy-template-based data augmentation technique that intelligently selects multiple regions of interest (ROIs) from an image. The proposed data augmentation framework, combined with the ensemble technique, surpassed the individual CNNs' performance in malignancy prediction on breast cytology and histopathology datasets, achieving accuracies of 84.37%, 97.32%, and 91.67% on the JUCYT, BreakHis, and BI datasets, respectively. This automated technique can serve as a useful guide for pathologists, helping them reach an appropriate diagnostic decision with less time and effort. The relevant code for the proposed ensemble model is publicly available on GitHub.
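As a rough illustration of the ensemble idea in this abstract, the sketch below fuses per-model class probabilities from three hypothetical CNNs. The paper's actual Copula-based coupling is more sophisticated; a plain weighted average is shown here only as a stand-in, and all model outputs and weights are invented.

```python
# Simplified stand-in for ensemble fusion over per-model class probabilities.
# The paper uses a Copula-based combination; weighted averaging is shown here
# purely for illustration (all numbers below are made up).

def fuse_probabilities(model_outputs, weights=None):
    """model_outputs: one per-class probability list per CNN."""
    n_models = len(model_outputs)
    n_classes = len(model_outputs[0])
    if weights is None:
        weights = [1.0 / n_models] * n_models
    fused = [0.0] * n_classes
    for w, probs in zip(weights, model_outputs):
        for i, p in enumerate(probs):
            fused[i] += w * p
    total = sum(fused)  # renormalise so the fused vector sums to 1
    return [p / total for p in fused]

# Three "CNNs" voting on benign (index 0) vs malignant (index 1)
densenet = [0.20, 0.80]
resnet = [0.35, 0.65]
inception = [0.10, 0.90]
fused = fuse_probabilities([densenet, resnet, inception])
```

With equal weights the malignant class wins here because all three backbones lean that way; a learned Copula would instead model the dependence between the backbones' outputs.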
Affiliation(s)
- Soumyajyoti Dey
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
- Shyamali Mitra
- Jadavpur University, Department of Instrumentation & Electronics Engineering, Kolkata, West Bengal, India.
- Debashri Mondal
- Theism Medical Diagnostics Centre, Kolkata, West Bengal, India.
- Mita Nasipuri
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
- Nibaran Das
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
53
Seyer Cagatan A, Taiwo Mustapha M, Bagkur C, Sanlidag T, Ozsahin DU. An Alternative Diagnostic Method for C. neoformans: Preliminary Results of Deep-Learning Based Detection Model. Diagnostics (Basel) 2022; 13:81. [PMID: 36611373] [PMCID: PMC9818640] [DOI: 10.3390/diagnostics13010081]
Abstract
Cryptococcus neoformans is an opportunistic fungal pathogen of significant medical importance, especially in immunosuppressed patients, and is the causative agent of cryptococcosis. An estimated 220,000 annual cases of cryptococcal meningitis (CM) occur among people with HIV/AIDS globally, resulting in nearly 181,000 deaths. The gold standards for diagnosis are direct microscopic identification and fungal culture. However, these methods require special equipment and clinical expertise, and relatively low sensitivities have been reported. This study aims to produce and implement a deep-learning approach to detect C. neoformans in patient samples. We adopted the state-of-the-art VGG16 model, which determines the output from a single image; images that contain C. neoformans were labeled positive, and the others negative. The VGG16 model achieved an accuracy of 86.88% and a loss of 0.36203. The results show that the VGG16 deep learning framework can serve as an alternative diagnostic method for rapid and accurate identification of C. neoformans, enabling early diagnosis and subsequent treatment. Further studies should include more, and higher-quality, images to address the limitations of the adopted model.
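The accuracy and loss figures quoted above are standard binary-classification metrics. A minimal sketch of how such numbers are computed from per-image positive-class probabilities, with toy data rather than the paper's outputs:

```python
import math

# Toy sketch: accuracy and binary cross-entropy loss from model outputs.
# probs are P(image contains C. neoformans); labels: 1 = positive sample.
# The values below are invented, not the study's predictions.

def accuracy(probs, labels, threshold=0.5):
    correct = sum((p >= threshold) == bool(y) for p, y in zip(probs, labels))
    return correct / len(labels)

def binary_cross_entropy(probs, labels):
    eps = 1e-12  # guard against log(0)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(labels)

probs = [0.9, 0.8, 0.3, 0.2, 0.6]
labels = [1, 1, 0, 0, 0]
acc = accuracy(probs, labels)
loss = binary_cross_entropy(probs, labels)
```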
Affiliation(s)
- Ayse Seyer Cagatan
- Department of Medical and Clinical Microbiology, Faculty of Medicine, Cyprus International University, TRNC Mersin 10, Nicosia 99010, Turkey
- Mubarak Taiwo Mustapha
- Operational Research Center in Healthcare, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Cemile Bagkur
- DESAM Research Institute, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Tamer Sanlidag
- DESAM Research Institute, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Dilber Uzun Ozsahin
- Operational Research Center in Healthcare, Near East University, TRNC Mersin 10, Nicosia 99138, Turkey
- Medical Diagnostic Imaging Department, College of Health Science, University of Sharjah, Sharjah 27272, United Arab Emirates
54
Hirokawa M, Niioka H, Suzuki A, Abe M, Arai Y, Nagahara H, Miyauchi A, Akamizu T. Application of deep learning as an ancillary diagnostic tool for thyroid FNA cytology. Cancer Cytopathol 2022; 131:217-225. [PMID: 36524985] [DOI: 10.1002/cncy.22669]
Abstract
BACKGROUND Several studies have used artificial intelligence (AI) to analyze cytology images, but AI has yet to be adopted in clinical practice. The objective of this study was to demonstrate the accuracy of AI-based image analysis for thyroid fine-needle aspiration cytology (FNAC) and to propose its application in clinical practice. METHODS In total, 148,395 microscopic FNAC images were obtained from 393 thyroid nodules for training and validation, and EfficientNetV2-L was used as the image-classification model. The 35 nodules classified as atypia of undetermined significance (AUS) were then predicted with the trained model. RESULTS The precision-recall area under the curve (PR AUC) was >0.95 for all classes except poorly differentiated thyroid carcinoma (PR AUC = 0.49) and medullary thyroid carcinoma (PR AUC = 0.91). Poorly differentiated thyroid carcinoma had the lowest recall (35.4%) and was difficult to distinguish from papillary, medullary, and follicular thyroid carcinoma. Follicular adenomas and follicular thyroid carcinomas were distinguished from each other with 86.7% and 93.9% recall, respectively. In two-dimensional mapping of the data with t-distributed stochastic neighbor embedding, the lymphomas, follicular adenomas, and anaplastic thyroid carcinomas separated into three, two, and two groups, respectively. Analysis of the AUS nodules showed 94.7% sensitivity, 14.4% specificity, 56.3% positive predictive value, and 66.7% negative predictive value. CONCLUSIONS The authors developed an AI-based approach to analyzing thyroid FNAC cases encountered in routine practice. This analysis could be useful for the clinical management of AUS and follicular neoplasm nodules (e.g., an online AI platform for thyroid cytology consultations).
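The four AUS-nodule figures quoted above (sensitivity, specificity, PPV, NPV) all follow mechanically from a 2x2 confusion table. A sketch with illustrative counts (chosen to land near the reported percentages, not taken from the paper):

```python
# Diagnostic metrics from a 2x2 confusion table.
# tp/fp/fn/tn counts below are illustrative, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # recall on true positives
        "specificity": tn / (tn + fp),   # recall on true negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=18, fp=14, fn=1, tn=2)
```

With these toy counts, sensitivity is 18/19 (about 94.7%) and PPV is 18/32 (56.25%), close to the abstract's reported values.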
Affiliation(s)
- Hirohiko Niioka
- Institute for Datability Science, Osaka University, Suita, Japan
- Ayana Suzuki
- Department of Diagnostic Pathology and Cytology, Kuma Hospital, Kobe, Japan
- Masatoshi Abe
- Institute for Datability Science, Osaka University, Suita, Japan
- Yusuke Arai
- Institute for Datability Science, Osaka University, Suita, Japan
- Hajime Nagahara
- Institute for Datability Science, Osaka University, Suita, Japan
55
Kani MAJM, Parvathy MS, Banu SM, Kareem MSA. Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-221386]
Abstract
In this article, a methodological approach to classifying malignant melanoma in dermoscopy images is presented. Early treatment of skin cancer increases the patient's survival rate, and classifying melanoma in its early stages allows dermatologists to treat the patient appropriately. Dermatologists need considerable time to diagnose affected skin lesions because of the high resemblance between melanoma and benign lesions. In this paper, a deep learning-based computer-aided diagnosis (CAD) system is developed to classify skin lesions accurately and with a high classification rate. A new architecture is framed using the Inception V3 model as the baseline: the features extracted by the Inception network are flattened and passed to a DenseNet block, which extracts finer-grained features of the lesion. The International Skin Imaging Collaboration (ISIC) archive dataset contains 3307 dermoscopy images, including both benign and malignant skin images. The images were trained with the proposed architecture using a learning rate of 0.0001 and a batch size of 64 with various optimizers. The performance of the proposed model was evaluated using a confusion matrix and ROC-AUC curves. The experimental results show that the proposed model attains the highest accuracy, 91.29%, compared with state-of-the-art methods such as ResNet, VGG-16, DenseNet, and MobileNet. The classification accuracy, sensitivity, specificity, testing accuracy, and AUC obtained were 90.33%, 82.87%, 91.29%, 87.12%, and 87.40%, respectively.
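Since this abstract (like several others in this list) reports an AUC, a minimal sketch of the quantity being computed may help: the ROC AUC equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties count half). The scores and labels below are toy values, not the paper's.

```python
# ROC AUC via the rank (Mann-Whitney) formulation: the fraction of
# positive/negative pairs where the positive example scores higher,
# counting ties as 0.5. Toy data only.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.3]   # model confidence "malignant"
labels = [1, 1, 0, 1]           # ground truth (1 = malignant)
auc = roc_auc(scores, labels)
```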
Affiliation(s)
- Mohamed Ali Jinna Mathina Kani
- Computer Science and Engineering, Sethu Institute of Technology Affiliated to Anna University, Pulloor, Kariyapatti, Tamilnadu, India
- Meenakshi Sundaram Parvathy
- Computer Science and Engineering, Sethu Institute of Technology Affiliated to Anna University, Pulloor, Kariyapatti, Tamilnadu, India
56
Xiao P, Pan Y, Cai F, Tu H, Liu J, Yang X, Liang H, Zou X, Yang L, Duan J, Xv L, Feng L, Liu Z, Qian Y, Meng Y, Du J, Mei X, Lou T, Yin X, Tan Z. A deep learning based framework for the classification of multi-class capsule gastroscope image in gastroenterologic diagnosis. Front Physiol 2022; 13:1060591. [PMID: 36467700] [PMCID: PMC9716070] [DOI: 10.3389/fphys.2022.1060591]
Abstract
Purpose: The purpose of this work is to develop a deep learning framework based on transfer learning that automatically classifies capsule gastroscope images into three categories: normal gastroscopic images, chronic erosive gastritis images, and gastric ulcer images, so that high-risk factors for carcinogenesis, such as atrophic gastritis (AG), can be identified early. Method: We used pre-trained VGG-16, ResNet-50, and Inception V3 models, fine-tuned them, and adjusted the hyperparameters according to our classification problem. Results: A dataset of 380 images per category was collected and divided into a training set and a test set in a 70%/30% ratio. Among the three models, VGG-16 achieved the highest accuracy, 94.80%, in classifying capsule gastroscopic images into the three categories, and the proposed approach showed respectable specificity and accuracy overall. Conclusion: Gastroscopy is the primary technique and industry standard for diagnosing and treating numerous stomach problems, and the capsule gastroscope is a new screening tool for gastric diseases. However, several factors limit its effectiveness, including capsule endoscopy image quality and the doctor's experience and fatigue. Our suggested framework will help prevent incorrect diagnoses caused by low image quality, individual experience, and inadequate gastroscopy inspection coverage, among other factors, and should thereby raise the standard of gastroscopy. Deep learning has great potential in gastritis image classification for assisting with accurate diagnoses after endoscopic procedures.
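The 70%/30% per-class split described in this abstract can be sketched in a few lines. Class names come from the abstract; the filenames and seed are invented for illustration.

```python
import random

# Stratified 70/30 split: shuffle each class independently, then cut.
# 380 images per class, as in the abstract; filenames are made up.

def stratified_split(items_by_class, train_frac=0.7, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    train, test = [], []
    for label, items in items_by_class.items():
        items = items[:]          # avoid mutating the caller's lists
        rng.shuffle(items)
        cut = int(len(items) * train_frac)
        train += [(x, label) for x in items[:cut]]
        test += [(x, label) for x in items[cut:]]
    return train, test

data = {label: [f"{label}_{i:03d}.png" for i in range(380)]
        for label in ("normal", "erosive_gastritis", "ulcer")}
train, test = stratified_split(data)
```

Splitting per class keeps the 70/30 ratio inside every category, so no class is under-represented in the test set.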
Affiliation(s)
- Ping Xiao
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Department of Otorhinolaryngology Head and Neck Surgery, Shenzhen Children’s Hospital, Shenzhen, China
- Yuhang Pan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Feiyue Cai
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Shenzhen Nanshan District General Practice Alliance, Shenzhen, China
- Haoran Tu
- Group International Division, Shenzhen Senior High School, Shenzhen, China
- Junru Liu
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Xuemei Yang
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Huanling Liang
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Xueqing Zou
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Li Yang
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Jueni Duan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Long Xv
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Lijuan Feng
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Zhenyu Liu
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Yun Qian
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Yu Meng
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Jingfeng Du
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Xi Mei
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Ting Lou
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Xiaoxv Yin
- School of Public Health, Huazhong University of Science and Technology, Wuhan, China
- Zhen Tan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Shenzhen Nanshan District General Practice Alliance, Shenzhen, China
57
Herbsthofer L, Tomberger M, Smolle MA, Prietl B, Pieber TR, López-García P. Cell2Grid: an efficient, spatial, and convolutional neural network-ready representation of cell segmentation data. J Med Imaging (Bellingham) 2022; 9:067501. [PMID: 36466076] [PMCID: PMC9709305] [DOI: 10.1117/1.jmi.9.6.067501]
Abstract
Purpose: Cell segmentation algorithms are commonly used to analyze large histologic images as they facilitate interpretation, but on the other hand they complicate hypothesis-free spatial analysis. Therefore, many applications instead train convolutional neural networks (CNNs) on high-resolution images that resolve individual cells, but their practical application is severely limited by computational resources. In this work, we propose and investigate an alternative spatial data representation based on cell segmentation data for direct training of CNNs. Approach: We introduce and analyze the properties of Cell2Grid, an algorithm that generates compact images from cell segmentation data by placing individual cells into a low-resolution grid and resolving possible cell conflicts. For evaluation, we present a case study on colorectal cancer relapse prediction using fluorescent multiplex immunohistochemistry images. Results: We could generate Cell2Grid images at 5-μm resolution that were 100 times smaller than the original ones. Cell features, such as phenotype counts and nearest-neighbor cell distances, remained similar to those of the original cell segmentation tables (p < 0.0001). These images could be fed directly to a CNN for predicting colon cancer relapse. Our experiments showed that the test-set error rate was reduced by 25% compared with CNNs trained on images rescaled to 5 μm with bilinear interpolation. Compared with images at 1-μm resolution (bilinear rescaling), our method reduced CNN training time by 85%. Conclusions: Cell2Grid is an efficient spatial data representation algorithm that enables the use of conventional CNNs on cell segmentation data. Its cell-based representation additionally opens a door to simplified model interpretation and synthetic image generation.
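The core Cell2Grid step, placing each segmented cell into a coarse grid and resolving collisions, can be sketched in plain Python. This is an illustrative stand-in under simplifying assumptions (centroids in micrometres, conflicts resolved by scanning outward for the nearest free pixel); the published algorithm's conflict handling is more elaborate.

```python
# Minimal Cell2Grid-style sketch: map cell centroids (in micrometres) onto a
# low-resolution grid at `resolution_um` per pixel. If a cell's target pixel
# is occupied, scan outward for the nearest free pixel. Illustrative only.

def cell2grid(cells, grid_size, resolution_um=5.0):
    """cells: list of (x_um, y_um, phenotype). Returns {(row, col): phenotype}."""
    grid = {}
    for x, y, phenotype in cells:
        r, c = int(y // resolution_um), int(x // resolution_um)
        for radius in range(grid_size):   # widen search until a pixel is free
            placed = False
            for rr in range(max(0, r - radius), min(grid_size, r + radius + 1)):
                for cc in range(max(0, c - radius), min(grid_size, c + radius + 1)):
                    if (rr, cc) not in grid:
                        grid[(rr, cc)] = phenotype
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
    return grid

# Two cells collide in pixel (0, 0); the second is nudged to a neighbour.
cells = [(2.0, 2.0, "Tcell"), (3.0, 3.0, "tumor"), (12.0, 2.0, "Tcell")]
grid = cell2grid(cells, grid_size=4)
```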
Affiliation(s)
- Laurin Herbsthofer
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Martina Tomberger
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- Maria A. Smolle
- Medical University of Graz, Department of Orthopaedics and Trauma, Graz, Austria
- Barbara Prietl
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria
- Thomas R. Pieber
- CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- BioTechMed, Graz, Austria
- Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria
- Health Institute for Biomedicine and Health Sciences, Joanneum Research Forschungsgesellschaft mbH, Graz, Austria
58
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683] [PMCID: PMC9654172] [DOI: 10.3390/cancers14215264]
Abstract
The revolution in artificial intelligence and its impact on our daily lives have generated tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathology, such as tumor detection, classification, grading of variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and time consumed, both of which affect the accuracy of the decisions taken. Regrettably, certain obstacles to artificial intelligence deployment remain, such as the applicability and validation of algorithms and computational technologies, as well as the ability to train pathologists and doctors to use these tools and their willingness to accept the results. This review paper surveys how machine learning and deep learning methods could be implemented in health care providers' routine tasks, and the obstacles and opportunities for artificial intelligence applications in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
- Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
59
Sourav MSU, Wang H. Intelligent Identification of Jute Pests Based on Transfer Learning and Deep Convolutional Neural Networks. Neural Process Lett 2022; 55:1-18. [PMID: 35990859] [PMCID: PMC9376051] [DOI: 10.1007/s11063-022-10978-4]
Abstract
Pest attacks pose a substantial threat to jute production and other significant crop plants. Jute farmers in Bangladesh generally distinguish between pests that look alike by eye and experience, which is not always accurate. We developed an intelligent model for jute pest identification based on transfer learning (TL) and deep convolutional neural networks (DCNNs) to solve this practical problem. The proposed DCNN model realizes fast and accurate automatic identification of jute pests from photographs. Specifically, the VGG19 CNN model, pre-trained on the ImageNet database, was adapted by TL, and a well-structured image dataset of four dominant jute pests was established. Our model achieves a final accuracy of 95.86% on the four most vital jute pest classes. The model's performance is further demonstrated by precision, recall, F1-score, and confusion matrix results. The proposed model is integrated into Android and iOS applications for practical use.
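The per-class precision/recall/F1 evaluation mentioned above reduces to counting true positives, false positives, and false negatives for each class. A toy sketch (the class labels and predictions below are invented, not the paper's data):

```python
# Per-class precision, recall, and F1 from label lists. Toy data only;
# the pest class names are hypothetical examples, not the paper's taxonomy.

def prf1(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = ["semilooper", "hairy", "mite", "semilooper", "hairy"]
y_pred = ["semilooper", "hairy", "semilooper", "semilooper", "mite"]
p, r, f = prf1(y_true, y_pred, "semilooper")
```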
Affiliation(s)
- Md Sakib Ullah Sourav
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China
- Huidong Wang
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China
60
Auxiliary classification of cervical cells based on multi-domain hybrid deep learning framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103739]
61
Thakur N, Alam MR, Abdul-Ghafar J, Chong Y. Recent Application of Artificial Intelligence in Non-Gynecological Cancer Cytopathology: A Systematic Review. Cancers (Basel) 2022; 14:3529. [PMID: 35884593] [PMCID: PMC9316753] [DOI: 10.3390/cancers14143529]
Abstract
State-of-the-art artificial intelligence (AI) has recently gained considerable interest in the healthcare sector and has provided solutions to problems through automated diagnosis. Cytological examination is a crucial step in the initial diagnosis of cancer, although it shows limited diagnostic efficacy. Recently, AI applications in the processing of cytopathological images have shown promising results despite the elementary level of the technology. Here, we performed a systematic review with a quantitative analysis of recent AI applications in non-gynecological (non-GYN) cancer cytology to understand the current technical status. We searched the major online databases, including MEDLINE, Cochrane Library, and EMBASE, for relevant English articles published from January 2010 to January 2021. The search query terms were: "artificial intelligence", "image processing", "deep learning", "cytopathology", and "fine-needle aspiration cytology". Out of 17,000 studies, only 26 studies (26 models) were included in the full-text review, and 13 studies were included in the quantitative analysis. The AI models fell into eight classes according to target organ: thyroid (n = 11, 39%), urinary bladder (n = 6, 21%), lung (n = 4, 14%), breast (n = 2, 7%), pleural effusion (n = 2, 7%), ovary (n = 1, 4%), pancreas (n = 1, 4%), and prostate (n = 1, 4%). Most of the studies focused on classification and segmentation tasks. Although most studies reported impressive results, the sizes of the training and validation datasets were limited. Overall, AI is promising for non-GYN cancer cytopathology analysis, as it is for histopathology and gynecological cytology. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation found across all studies. Future studies with larger, high-quality annotated datasets and external validation are required.
Affiliation(s)
- Yosep Chong
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
62
Abbas Q. A hybrid transfer learning-based architecture for recognition of medical imaging modalities for healthcare experts. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-212171]
Abstract
Given the wide range of diseases and imaging modalities, a retrieval system that pulls the corresponding clinical cases from a large medical repository in time is a challenging task. Several computer-aided (CADx) systems have been developed to recognize medical imaging modalities (MIM) based on various standard machine learning (SML) and advanced deep learning (DL) algorithms. Pre-trained models such as convolutional neural networks (CNNs) have been used in the past as transfer learning (TL) architectures. However, using these pre-trained models on unseen datasets with a different domain of features is difficult: classifying different medical images requires relevant features together with a robust classifier, and this remains an unsolved task for MIM-based features. In this paper, a hybrid MIM-based classification system is developed by integrating the pre-trained VGG-19 and ResNet34 models into the original CNN model. The MIM-DTL model is then fine-tuned by updating the weights of the new layers as well as the weights of the original CNN layers. The performance of MIM-DTL is compared with state-of-the-art systems on The Cancer Imaging Archive (TCIA), Kvasir, and lower extremity radiographs (LERA) datasets in terms of statistical measures such as accuracy (ACC), sensitivity (SE), and specificity (SP). On average, the MIM-DTL model achieved 99% ACC, 97.5% SE, and 98% SP with fewer epochs than other TL architectures. The experimental results show that the MIM-DTL model outperforms others in recognizing medical imaging modalities and helps healthcare experts identify relevant diseases.
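The abstract describes fusing two pre-trained backbones into one classifier. At its simplest, such fusion concatenates the feature vectors from each backbone and feeds the result through a final (learned) linear layer with a softmax. The sketch below shows only that fusion step; the feature vectors and weight matrix are hand-set toys, whereas in the real model they come from VGG-19/ResNet34 and training.

```python
import math

# Toy feature-fusion step: concatenate two backbone feature vectors, apply a
# linear layer, and softmax the scores. All values are invented; real models
# learn the weights and produce much longer feature vectors.

def fuse_and_classify(feat_a, feat_b, weights):
    fused = feat_a + feat_b  # list concatenation = feature concatenation
    scores = [sum(w * x for w, x in zip(row, fused)) for row in weights]
    m = max(scores)                      # subtract max for numeric stability
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    return [e / total for e in exp]      # softmax over modality classes

vgg_feat = [0.2, 0.9]                    # pretend VGG-19 features
resnet_feat = [0.1, 0.8]                 # pretend ResNet-34 features
weights = [[1, 0, 0, 1], [0, 1, 0, 0]]   # 2 classes x 4 fused features
probs = fuse_and_classify(vgg_feat, resnet_feat, weights)
```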
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
63
Guanglong B, Qian G. Correlation Analysis between the Emotion and Aesthetics for Chinese Classical Garden Design Based on Deep Transfer Learning. J Environ Public Health 2022; 2022:1828782. [PMID: 35855813] [PMCID: PMC9288283] [DOI: 10.1155/2022/1828782]
Abstract
Garden design with healthy psychological characteristics is a design method that mines positive psychological expressions and converts them into garden design elements. Chinese classical gardens are part of China's cultural heritage, and studying the beauty of space in classical gardens is of great significance to inheriting traditional culture, art, and aesthetics. At present, research on garden design with healthy psychological characteristics mainly focuses on constructing relevant theories and methods with the help of various intelligent tools. In this study, we propose a deep learning-based end-to-end model to recognize the positive psychological design of a Chinese classical garden. The model is based on Inception V3, proposed by Google; its innovation lies in integrating transfer learning into Inception V3 to improve generalization ability. Moreover, thanks to the end-to-end structure, the characteristics of the garden design style need not be hand-encoded. We design a positive psychological characteristics classification task to recognize high and low aesthetic feeling in rockery design. Experimental results indicate that our proposed model achieves the best performance among the comparison models.
Affiliation(s)
- Bao Guanglong
- College of Fine Arts and Design, YangZhou University, Yangzhou 225000, Jiangsu, China
- Gao Qian
- College of Fine Arts and Design, YangZhou University, Yangzhou 225000, Jiangsu, China
|
64
|
Gouda W, Sama NU, Al-Waakid G, Humayun M, Jhanjhi NZ. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare (Basel) 2022; 10:healthcare10071183. [PMID: 35885710 PMCID: PMC9324455 DOI: 10.3390/healthcare10071183] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Revised: 06/13/2022] [Accepted: 06/15/2022] [Indexed: 12/12/2022] Open
Abstract
An increasing number of genetic and metabolic anomalies have been found to lead to cancer, which is generally fatal. Cancerous cells may spread to any part of the body, where they can be life-threatening. Skin cancer is one of the most common types of cancer, and its frequency is increasing worldwide. The main subtypes of skin cancer are squamous and basal cell carcinomas, and melanoma, which is clinically aggressive and responsible for most deaths. Therefore, skin cancer screening is necessary. One of the best methods to accurately and swiftly identify skin cancer is deep learning (DL). In this research, a deep learning convolutional neural network (CNN) was used to detect the two primary classes of tumor, malignant and benign, using the ISIC2018 dataset. This dataset comprises 3533 skin lesions, including benign, malignant, nonmelanocytic, and melanocytic tumors. The images were first retouched and enhanced using ESRGAN, then augmented, normalized, and resized during the preprocessing step. Skin lesion images were classified with a CNN based on results aggregated over many repetitions. Multiple transfer learning models, such as ResNet50, InceptionV3, and Inception-ResNet, were then fine-tuned. In addition to experimenting with several models (the designed CNN, ResNet50, InceptionV3, and Inception-ResNet), this study's innovation and contribution lie in the use of ESRGAN as a preprocessing step. Our designed model showed results comparable to the pretrained models. Simulations on the ISIC 2018 skin lesion dataset showed that the suggested strategy was successful: the CNN achieved an accuracy of 83.2%, compared with ResNet50 (83.7%), InceptionV3 (85.8%), and Inception-ResNet (84%).
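The augment/normalize/resize preprocessing steps named in the abstract can be sketched as below (the ESRGAN enhancement stage itself is omitted, and nearest-neighbour resizing plus a horizontal flip are simplifying stand-ins, not the paper's exact pipeline):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize (a simple stand-in for the resizing step)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess(img, size=224):
    """Resize, normalize to [0, 1], and return the image plus a flipped copy
    (horizontal flip as a minimal augmentation example)."""
    img = resize_nearest(img, size, size).astype(np.float64) / 255.0
    return img, img[:, ::-1]

lesion = np.random.default_rng(1).integers(0, 256, size=(600, 450, 3))
x, x_flip = preprocess(lesion)
```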
Affiliation(s)
- Walaa Gouda
- Department of Computer Engineering and Network, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering at Shoubra, Benha University, Cairo 4272077, Egypt
- Correspondence: (W.G.); (M.H.)
- Najm Us Sama
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Malaysia
- Ghada Al-Waakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Correspondence: (W.G.); (M.H.)
- Noor Zaman Jhanjhi
- School of Computer Science and Engineering (SCE), Taylor's University, Subang Jaya 47500, Malaysia
|
65
|
Smith DL, Nguyen LV, Ottaway DJ, Cabral TD, Fujiwara E, Cordeiro CMB, Warren-Smith SC. Machine learning for sensing with a multimode exposed core fiber specklegram sensor. OPTICS EXPRESS 2022; 30:10443-10455. [PMID: 35473011 DOI: 10.1364/oe.443932] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/28/2021] [Indexed: 06/14/2023]
Abstract
Fiber specklegram sensors (FSSs) traditionally use statistical methods to analyze specklegrams obtained from fibers for sensing purposes, but these can suffer from limitations such as vulnerability to noise and limited dynamic range. In this paper we demonstrate that deep learning improves the analysis of specklegrams for sensing, which we show for both air-temperature and water-immersion-length measurements. Two deep neural networks (DNNs), a convolutional neural network and a multi-layer perceptron, are used and compared to a traditional correlation technique on data obtained from a multimode exposed-core fiber. We also demonstrate that the DNNs can be trained to be robust against a random noise source such as specklegram translations.
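The "traditional correlation technique" the DNNs are compared against is typically some form of zero-mean normalized cross-correlation between a reference specklegram and the current one; a minimal sketch of that baseline (the exact correlation measure used in the paper is not specified here, so this is an assumed common form):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two specklegrams:
    1 for identical patterns, near 0 for uncorrelated ones."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))          # reference specklegram
same = ref.copy()                   # unchanged measurand
perturbed = rng.random((64, 64))    # strongly perturbed speckle pattern
```

A falling correlation value is then mapped to a change in the measurand; the DNN approach replaces this single statistic with learned features.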
|
66
|
Tiwari S, Jain A. A lightweight capsule network architecture for detection of COVID-19 from lung CT scans. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:419-434. [PMID: 35465213 PMCID: PMC9015631 DOI: 10.1002/ima.22706] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 11/22/2021] [Accepted: 01/04/2022] [Indexed: 05/28/2023]
Abstract
COVID-19, caused by a novel coronavirus, has spread quickly and produced a worldwide respiratory-illness outbreak, so large-scale screening is needed to prevent the disease from spreading. Compared with the reverse transcription polymerase chain reaction (RT-PCR) test, computed tomography (CT) is far more consistent, concrete, and precise in detecting COVID-19 patients in clinical diagnosis. An architecture based on deep learning is proposed that integrates a capsule network with different variants of convolutional neural networks: DenseNet, ResNet, VGGNet, and MobileNet are each combined with CapsNet to detect COVID-19 cases from lung CT scans. All four models provide adequate accuracy, among which VGGCapsNet, DenseCapsNet, and MobileCapsNet achieve the highest accuracy of 99%. An Android app could be deployed using the MobileCapsNet model to detect COVID-19, as it is a lightweight model well suited to handheld devices such as mobile phones.
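The defining operation of the capsule networks used here is the "squashing" nonlinearity, which keeps a capsule vector's direction while mapping its length into [0, 1) so that length can represent probability. A minimal sketch of that standard function (from the original CapsNet formulation, not code from this paper):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule-network squashing: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Preserves direction, maps length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

v = squash(np.array([3.0, 4.0]))    # input vector of length 5
length = np.linalg.norm(v)          # squashed length: 25/26
```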
Affiliation(s)
- Shamik Tiwari
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
- Anurag Jain
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
|
67
|
Machine Learning and Deep Learning Algorithms for Skin Cancer Classification from Dermoscopic Images. Bioengineering (Basel) 2022; 9:bioengineering9030097. [PMID: 35324786 PMCID: PMC8945332 DOI: 10.3390/bioengineering9030097] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/19/2022] [Accepted: 02/23/2022] [Indexed: 11/17/2022] Open
Abstract
We carry out a critical assessment of machine learning and deep learning models for the classification of skin tumors. Machine learning (ML) algorithms tested in this work include logistic regression, linear discriminant analysis, k-nearest neighbors classifier, decision tree classifier and Gaussian naive Bayes, while deep learning (DL) models employed are either based on a custom Convolutional Neural Network model, or leverage transfer learning via the use of pre-trained models (VGG16, Xception and ResNet50). We find that DL models, with accuracies up to 0.88, all outperform ML models. ML models exhibit accuracies below 0.72, which can be increased to up to 0.75 with ensemble learning. To further assess the performance of DL models, we test them on a larger and more imbalanced dataset. Metrics, such as the F-score and accuracy, indicate that, after fine-tuning, pre-trained models perform extremely well for skin tumor classification. This is most notably the case for VGG16, which exhibits an F-score of 0.88 and an accuracy of 0.88 on the smaller database, and metrics of 0.70 and 0.88, respectively, on the larger database.
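The ensemble learning that lifts the ML accuracy from 0.72 to 0.75 can be illustrated with hard majority voting, one common ensemble scheme (the abstract does not say which scheme was used, so this is an assumed example):

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: each row of `predictions` holds one model's
    class labels; the output is the per-sample most common label."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions, None, n_classes)
    return votes.argmax(axis=0)

# Three weak classifiers that disagree on different samples.
model_a = [0, 1, 1, 0, 1]
model_b = [0, 1, 0, 0, 0]
model_c = [1, 1, 1, 0, 1]
ensemble = majority_vote([model_a, model_b, model_c])
```

When the base models make partly independent errors, the vote corrects some of them, which is why the ensemble can beat each individual ML model.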
|
68
|
Toğaçar M. Detection of retinopathy disease using morphological gradient and segmentation approaches in fundus images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106579. [PMID: 34896689 DOI: 10.1016/j.cmpb.2021.106579] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Revised: 12/01/2021] [Accepted: 12/03/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Diabetes-related conditions can cause glaucoma, cataracts, optic neuritis, paralysis of the eye muscles, or various retinal damages over time. Diabetic retinopathy, the most common form of diabetes-related blindness, occurs when the blood vessels in the retina become damaged, leading to loss of vision in advanced stages. The disease can occur in any diabetic patient, and the most important factor in treating it is early diagnosis. Nowadays, deep learning models and machine learning methods are already used in early-diagnosis systems. In this study, two publicly available datasets were used, each divided into five classes according to the severity of diabetic retinopathy. The objectives of the proposed approach are to improve the performance of CNN models by processing fundus images through preprocessing steps (morphological gradient and segmentation approaches), and to select efficient sets from the type-based activation sets obtained from the CNN models using the Atom Search Optimization method, thereby increasing classification success. METHODS The proposed approach consists of three steps. In the first step, the morphological gradient method is used to suppress parasitic artifacts in each image, and the ocular vessels in the fundus images are extracted using the segmentation method. In the second step, the datasets are trained with transfer learning models, and the activations for each class are extracted from the last fully connected layers of these models. In the last step, the Atom Search Optimization method is used to select the most dominant activation set on a class basis from the extracted activations. RESULTS When classified by the severity of diabetic retinopathy, an overall accuracy of 99.59% was achieved for dataset #1 and 99.81% for dataset #2.
CONCLUSIONS The overall accuracy achieved with the proposed approach increased. This increase was obtained by applying the preprocessing steps and selecting the dominant activation sets from the deep learning models with the Atom Search Optimization method.
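The morphological gradient used in the preprocessing step is the standard definition, dilation minus erosion, which responds strongly at intensity edges such as vessel boundaries. A minimal grayscale sketch (a 3 x 3 flat structuring element is an assumption; the paper's element size is not given here):

```python
import numpy as np

def morphological_gradient(img, k=3):
    """Morphological gradient: dilation minus erosion over a k x k window,
    highlighting edges such as vessel boundaries in a fundus image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    dilation = windows.max(axis=(2, 3))   # local maximum
    erosion = windows.min(axis=(2, 3))    # local minimum
    return dilation - erosion

img = np.zeros((6, 6), dtype=np.int32)
img[:, 3:] = 255                 # a sharp vertical edge
grad = morphological_gradient(img)
```

Flat regions give 0 and the two columns straddling the edge give the full intensity step, which is why the operator isolates contours.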
Affiliation(s)
- Mesut Toğaçar
- Computer Technologies Department, Technical Sciences Vocational School, Fırat University, Elazığ, Turkey.
|
69
|
Chandra S, Gourisaria MK, Gm H, Konar D, Gao X, Wang T, Xu M. Prolificacy Assessment of Spermatozoan via State-of-the-Art Deep Learning Frameworks. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:13715-13727. [PMID: 35291304 PMCID: PMC8920051 DOI: 10.1109/access.2022.3146334] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Childlessness, or infertility, among couples has become a global health concern. Due to the rise in infertility, couples are seeking medical support to achieve reproduction. This paper addresses the diagnosis of infertility in men, for which the major tool is Sperm Morphology Analysis (SMA). We explore deep learning frameworks to automate the classification of sperm cells by fertility, investigating the performance of multiple state-of-the-art deep neural networks on the MHSMA dataset. The experimental results demonstrate that the deep learning-based framework outperforms human experts on sperm classification in terms of accuracy, throughput, and reliability. We further analyze the sperm cell data by visualizing the feature activations of the deep learning models, providing a new perspective for understanding the data. Finally, a comprehensive analysis of the experimental results is presented, attributing them to pertinent causes.
Affiliation(s)
- Satish Chandra
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha 751024, India
- Harshvardhan Gm
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha 751024, India
- Debanjan Konar
- CASUS-Center for Advanced Systems Understanding, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 02826 Görlitz, Germany
- Xin Gao
- Computer, Electrical and Mathematical Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
- Tianyang Wang
- Department of Computer Science & Information Technology, Austin Peay State University, Clarksville, TN 37044, USA
- Min Xu
- Computational Biology Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
|
70
|
Zhu YC, Jin PF, Bao J, Jiang Q, Wang X. Thyroid ultrasound image classification using a convolutional neural network. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:1526. [PMID: 34790732 PMCID: PMC8576712 DOI: 10.21037/atm-21-4328] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 09/16/2021] [Indexed: 11/06/2022]
Abstract
Background Ultrasound (US) is widely used in the clinical diagnosis of thyroid nodules, and artificial intelligence-powered US is becoming an active topic in the research community. This study aimed to develop an improved deep learning model-based algorithm to classify benign and malignant thyroid nodules (TNs) using thyroid US images. Methods In total, 592 patients with 600 TNs were included in the internal training, validation, and testing data set; 187 patients with 200 TNs were recruited for the external test data set. We developed a Visual Geometry Group (VGG)-16T model, based on the VGG-16 architecture but with additional batch normalization (BN) and dropout layers alongside the fully connected layers. We conducted 10-fold cross-validation to analyze the performance of the VGG-16T model using a data set of gray-scale US images from 5 different brands of US machines. Results For the internal data set, the VGG-16T model had 87.43% sensitivity, 85.43% specificity, and 86.43% accuracy. For the external data set, the VGG-16T model achieved an area under the curve (AUC) of 0.829 [95% confidence interval (CI): 0.770–0.879], a radiologist with 15 years' working experience achieved an AUC of 0.705 (95% CI: 0.659–0.801), a radiologist with 10 years' experience achieved an AUC of 0.725 (95% CI: 0.653–0.797), and a radiologist with 5 years' experience achieved an AUC of 0.660 (95% CI: 0.584–0.736). Conclusions The VGG-16T model had high specificity, sensitivity, and accuracy in differentiating between malignant and benign TNs. Its diagnostic performance was superior to that of experienced radiologists. Thus, the proposed improved deep-learning model can assist radiologists in diagnosing thyroid cancer.
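The 10-fold cross-validation used to evaluate VGG-16T works by splitting the samples into ten folds and letting each fold serve once as the validation set. A minimal index-splitting sketch (the 600-nodule count matches the internal data set; the shuffling seed is arbitrary):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k folds; each fold serves
    once as the validation set while the remaining folds form the training set."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

splits = list(kfold_indices(600, k=10))   # 600 nodules -> 10 folds of 60
```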
Affiliation(s)
- Yi-Cheng Zhu
- First Clinical Medical College, Soochow University, Suzhou, China
- Peng-Fei Jin
- Department of Radiology, First Affiliated Hospital of Soochow University, Suzhou, China
- Jie Bao
- Department of Radiology, First Affiliated Hospital of Soochow University, Suzhou, China
- Quan Jiang
- Pudong New Area People's Hospital Affiliated to Shanghai University of Medicine and Health Sciences, Shanghai, China
- Ximing Wang
- Department of Radiology, First Affiliated Hospital of Soochow University, Suzhou, China
|
71
|
Evaluation of Non-Classical Decision-Making Methods in Self Driving Cars: Pedestrian Detection Testing on Cluster of Images with Different Luminance Conditions. ENERGIES 2021. [DOI: 10.3390/en14217172] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
According to representatives of the automotive industry, self-driving (fully automated) cars will spread over the upcoming two decades, owing to the technological breakthroughs of the fourth industrial revolution; in particular, the introduction of deep learning has completely changed the concept of automation. Considerable research is being conducted on object detection systems, for instance lane, pedestrian, or signal detection. This paper focuses specifically on pedestrian detection while the car is moving on the road, where speed and environmental conditions affect visibility. To explore the environmental conditions, a custom pedestrian dataset based on Common Objects in Context (COCO) is used. The images are manipulated with the inverse gamma correction method, in which pixel values are changed to produce a sequence of bright and dark images; gamma correction directly controls luminance intensity. This paper presents a flexible, simple detection system, Mask R-CNN, which builds on the Faster R-CNN (Region-Based Convolutional Neural Network) model: Mask R-CNN adds instance segmentation to the object recognition capability of Faster R-CNN. The performance of the Mask R-CNN models is evaluated using different Convolutional Neural Network (CNN) models as backbones. This approach may help future work, especially when dealing with different lighting conditions.
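The gamma correction used to simulate different luminance conditions is a power-law transform on normalized pixel values; applying a sequence of gamma values yields the bright and dark image variants. A minimal sketch (one common formulation, out = 255 * (in/255)^gamma, assumed here since the abstract does not give the exact formula):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law intensity transform: gamma > 1 darkens the image,
    gamma < 1 brightens it."""
    out = 255.0 * (img / 255.0) ** gamma
    return out.astype(np.uint8)

img = np.full((4, 4), 128, dtype=np.uint8)  # mid-gray test image
dark = gamma_correct(img, 2.0)              # darker variant
bright = gamma_correct(img, 0.5)            # brighter variant
```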
|
72
|
Rai HM, Chatterjee K. 2D MRI image analysis and brain tumor detection using deep learning CNN model LeU-Net. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:36111-36141. [DOI: 10.1007/s11042-021-11504-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 07/03/2021] [Accepted: 08/19/2021] [Indexed: 08/08/2023]
|
73
|
Dong R, Wang J, Weng S, Yuan H, Yang L. Field determination of hazardous chemicals in public security by using a hand-held Raman spectrometer and a deep architecture-search network. SPECTROCHIMICA ACTA. PART A, MOLECULAR AND BIOMOLECULAR SPECTROSCOPY 2021; 258:119871. [PMID: 33957446 DOI: 10.1016/j.saa.2021.119871] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 04/08/2021] [Accepted: 04/21/2021] [Indexed: 06/12/2023]
Abstract
With the advanced miniaturization and integration of instruments, Raman spectroscopy (RS) has demonstrated great value because of its non-invasive nature and fingerprint-identification ability, and has extended its applications in public security, especially for hazardous chemicals. However, fast and accurate RS analysis of hazardous chemicals in field tests by non-professionals remains challenging because an effective and timely spectrum-based chemical discrimination solution is lacking. In this study, a platform was developed for the field determination of hazardous chemicals in public security using a hand-held Raman spectrometer and a deep architecture-search network (DASN) hosted on a cloud server. On the Raman spectra of 300 chemicals, DASN stands out with an identification accuracy of 100% and outperforms other machine learning and deep learning methods. The network feature maps for the spectra of methamphetamine and ketamine focus on the main peaks at 1001 and 652 cm-1, indicating the powerful feature-extraction capability of DASN. Its receiver operating characteristic (ROC) curve completely encloses those of the other models, and the area under the curve reaches 1, implying excellent robustness. With the platform combining RS, DASN, and the cloud server, one test process, including Raman measurement and identification, can be performed in tens of seconds. Hence, the developed platform is simple, fast, and accurate, and can be considered a promising tool for on-scene identification of hazardous chemicals in public security.
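The ROC/AUC claim (a curve enclosing the others, area reaching 1) can be made concrete with the rank-sum identity: AUC equals the probability that a randomly chosen positive sample scores above a randomly chosen negative one, so perfectly separated scores give exactly 1. A minimal sketch with toy scores (illustration only, not the paper's data):

```python
import numpy as np

def auc_rank(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability that a
    random positive scores above a random negative, with ties counted 0.5."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1, the best possible value.
auc = auc_rank([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0])
```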
Affiliation(s)
- Ronglu Dong
- Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China; University of Science and Technology of China, Hefei 230026, Anhui, China
- Jinghong Wang
- National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China
- Shizhuang Weng
- National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China.
- Hecai Yuan
- National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China
- Liangbao Yang
- Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China.
|
74
|
Ali MS, Miah MS, Haque J, Rahman MM, Islam MK. An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models. MACHINE LEARNING WITH APPLICATIONS 2021. [DOI: 10.1016/j.mlwa.2021.100036] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
|
75
|
Hou J, Gao T. Explainable DCNN based chest X-ray image analysis and classification for COVID-19 pneumonia detection. Sci Rep 2021; 11:16071. [PMID: 34373554 PMCID: PMC8352869 DOI: 10.1038/s41598-021-95680-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 07/28/2021] [Indexed: 02/07/2023] Open
Abstract
To speed up the discovery of COVID-19 disease mechanisms from X-ray images, this research developed a new diagnosis platform using a deep convolutional neural network (DCNN) that can assist radiologists by distinguishing COVID-19 pneumonia from non-COVID-19 pneumonia based on chest X-ray classification and analysis. Such a tool can save time in interpreting chest X-rays and increase accuracy, thereby enhancing our medical capacity for the detection and diagnosis of COVID-19. An explainable method is also used in the DCNN to select instances of the X-ray dataset images that explain the behavior of the trained models and achieve higher prediction accuracy. The average accuracy of our method is above 96%, so it could replace manual reading and has the potential to be applied to large-scale rapid screening of COVID-19.
Affiliation(s)
- Jie Hou
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong, China
- Terry Gao
- Counties Manukau District Health Board, Auckland, 1640, New Zealand.
|
76
|
Enhancing of dataset using DeepDream, fuzzy color image enhancement and hypercolumn techniques to detection of the Alzheimer's disease stages by deep learning model. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05758-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
|
77
|
Effect of Interventional Therapy on Iliac Venous Compression Syndrome Evaluated and Diagnosed by Artificial Intelligence Algorithm-Based Ultrasound Images. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:5755671. [PMID: 34336159 PMCID: PMC8321720 DOI: 10.1155/2021/5755671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 07/06/2021] [Accepted: 07/16/2021] [Indexed: 11/17/2022]
Abstract
To explore the efficacy of artificial intelligence (AI) algorithm-based ultrasound images in diagnosing iliac vein compression syndrome (IVCS) and to assist clinicians in diagnosis, the characteristics of venous imaging in patients with IVCS were summarized. After ultrasound image acquisition, the image data were preprocessed and a deep learning model was constructed to detect the position of venous compression and recognize benign and malignant lesions. In addition, a dataset was built for model evaluation from hospital patients with thrombotic chronic venous disease (CVD) and deep vein thrombosis (DVT). Images whose IVCS features were extracted by dilated (atrous) convolution formed the AI-algorithm imaging group, while unprocessed ultrasound images served as the control group. Digital subtraction angiography (DSA) was performed to examine each patient's veins one week in advance; the patients were then divided into the AI-algorithm imaging group and the control group, and the correlation between May-Thurner syndrome (MTS) and AI-algorithm imaging was analyzed based on the DSA and ultrasound results. Iliac venous stenosis (or occlusion) or the formation of collateral circulation was used as the diagnostic criterion for MTS. Ultrasound showed that the AI-algorithm imaging group had a higher percentage of good treatment outcomes than the control group, and the recall, precision, and accuracy of the DMRF-convolutional neural network (CNN) were all superior to those of the control group. In addition, venous swelling in the AI-algorithm imaging group was milder, pain relief after treatment was greater, and the difference between the AI-algorithm imaging group and the control group was statistically significant (p < 0.005).
Grouped experiments showed that the constructed AI imaging model was effective for detecting and recognizing lower-extremity vein lesions in ultrasound images. In summary, ultrasound image evaluation and analysis using the AI algorithm during MTS treatment was accurate and efficient, laying a good foundation for future research, diagnosis, and treatment.
|
78
|
Liu Y, Han L, Wang H, Yin B. Classification of papillary thyroid carcinoma histological images based on deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-210100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Because many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, pathologists must spend considerable time on the differential diagnosis of PTC, relying on personal diagnostic experience, and such diagnosis is undoubtedly subjective, with poor consistency among observers. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method based on an Inception-Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and address the color inconsistency of histological images, a preprocessing module was constructed that includes color transfer and a mirror transform. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining the Inception and Residual networks to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in classifying PTC histological images.
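The two preprocessing operations named in the abstract can be sketched simply: per-channel mean/std matching is one common (Reinhard-style) form of color transfer for stain-color normalization, and the mirror transform is a horizontal flip. Both are illustrative stand-ins for the paper's exact module:

```python
import numpy as np

def color_transfer(src, ref):
    """Per-channel mean/std matching: shift and scale each channel of `src`
    so its statistics match those of the reference image `ref`."""
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out

def mirror_transform(img):
    """Horizontal mirror used to enlarge the training set."""
    return img[:, ::-1]

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (32, 32, 3))      # differently-stained source
ref = rng.uniform(100, 255, (32, 32, 3))    # reference staining
matched = color_transfer(src, ref)
```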
Affiliation(s)
- Yaning Liu
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
- Lin Han
- School of Information and Control Engineering, Qingdao University of Technology, Qingdao, China
- Hexiang Wang
- Department of Pathology, Qingdao Hospital of Traditional Chinese Medicine, Qingdao, China
- Bo Yin
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
|
79
|
Liu JL, Li SH, Cai YM, Lan DP, Lu YF, Liao W, Ying SC, Zhao ZH. Automated Radiographic Evaluation of Adenoid Hypertrophy Based on VGG-Lite. J Dent Res 2021; 100:1337-1343. [PMID: 33913367 DOI: 10.1177/00220345211009474] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Adenoid hypertrophy is a pathological hyperplasia of the adenoids, which may cause snoring and apnea, as well as impede breathing during sleep. The lateral cephalogram is commonly used by dentists to screen for adenoid hypertrophy, but it is tedious and time-consuming to measure the ratio of adenoid width to nasopharyngeal width for adenoid assessment. The purpose of this study was to develop a screening tool to automatically evaluate adenoid hypertrophy from lateral cephalograms using deep learning. We proposed the deep learning model VGG-Lite, using the largest data set (1,023 X-ray images) yet described to support the automatic detection of adenoid hypertrophy. We demonstrated that our model was able to automatically evaluate adenoid hypertrophy with a sensitivity of 0.898, a specificity of 0.882, positive predictive value of 0.880, negative predictive value of 0.900, and F1 score of 0.889. The comparison of model-only and expert-only detection performance showed that the fully automatic method (0.07 min) was about 522 times faster than the human expert (36.6 min). Comparison of human experts with or without deep learning assistance showed that model-assisted human experts spent an average of 23.3 min to evaluate adenoid hypertrophy using 100 radiographs, compared to an average of 36.6 min using an entirely manual procedure. We therefore concluded that deep learning could improve the accuracy, speed, and efficiency of evaluating adenoid hypertrophy from lateral cephalograms.
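The five screening metrics reported for VGG-Lite all derive from the four confusion-matrix counts. A minimal sketch of those standard definitions (the counts below are toy values, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and F1 from confusion counts,
    the metrics reported for the adenoid-hypertrophy screening model."""
    sens = tp / (tp + fn)                 # true-positive rate
    spec = tn / (tn + fp)                 # true-negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)    # harmonic mean of PPV and sensitivity
    return sens, spec, ppv, npv, f1

# Toy counts: 88 TP, 12 FP, 88 TN, 12 FN.
sens, spec, ppv, npv, f1 = screening_metrics(88, 12, 88, 12)
```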
Collapse
Affiliation(s)
- J L Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- S H Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Y M Cai
- Department of Dental Technology, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- D P Lan
- Department of Dental Technology, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Y F Lu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- W Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- S C Ying
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Z H Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
80
Abdullah MAM, Alkassar S, Jebur B, Chambers J. LBTS-Net: A fast and accurate CNN model for brain tumour segmentation. Healthc Technol Lett 2021; 8:31-36. [PMID: 33850627 PMCID: PMC8024025 DOI: 10.1049/htl2.12005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Revised: 01/29/2021] [Accepted: 02/05/2021] [Indexed: 12/21/2022] Open
Abstract
Accurate tumour segmentation in brain images is a complicated task due to the complex structure and irregular shape of the tumour. In this letter, our contribution is twofold: (1) a lightweight brain tumour segmentation network (LBTS-Net) is proposed for fast yet accurate brain tumour segmentation; (2) transfer learning is integrated within LBTS-Net to fine-tune the network and achieve robust tumour segmentation. To the best of our knowledge, this work is amongst the first in the literature to propose a lightweight convolutional neural network tailored for brain tumour segmentation. The proposed model is based on the VGG architecture, in which the number of convolution filters in the first layer is halved and depth-wise convolution is employed to lighten the VGG-16 and VGG-19 networks. Also, the original pixel labels in LBTS-Net are replaced by the new tumour labels to form the classification layer. Experimental results on the BRATS2015 database and comparisons with state-of-the-art methods confirmed the robustness of the proposed method, which achieved a global accuracy of 98.11% and a Dice score of 91% while being much more computationally efficient, containing almost half the number of parameters of the standard VGG network.
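The parameter saving from swapping standard convolutions for depth-wise separable ones can be sketched by simple counting. This is an illustrative calculation rather than the authors' code; the 256-channel, 3 × 3 layer below is taken from the standard VGG-16 configuration.

```python
# Parameter-count sketch: how depth-wise separable convolutions shrink a
# VGG-style layer. Illustrative only; LBTS-Net's exact layout is not given here.

def standard_conv_params(c_in, c_out, k=3):
    """Weights of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    """Depth-wise k x k conv (one filter per channel) + 1x1 point-wise conv."""
    return c_in * k * k + c_in * c_out

# A VGG-16 block-3 style layer: 256 -> 256 channels, 3x3 kernels.
std = standard_conv_params(256, 256)        # 589,824 weights
sep = depthwise_separable_params(256, 256)  # 67,840 weights
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

The roughly 8.7× reduction in this one layer illustrates why depth-wise convolution is a common choice for lightening VGG-style backbones.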
Affiliation(s)
- Sinan Alkassar
- Computer and Information Engineering Department, Ninevah University, Mosul, Iraq
- Bilal Jebur
- Computer and Information Engineering Department, Ninevah University, Mosul, Iraq
81
Li LR, Du B, Liu HQ, Chen C. Artificial Intelligence for Personalized Medicine in Thyroid Cancer: Current Status and Future Perspectives. Front Oncol 2021; 10:604051. [PMID: 33634025 PMCID: PMC7899964 DOI: 10.3389/fonc.2020.604051] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 12/21/2020] [Indexed: 12/12/2022] Open
Abstract
Thyroid cancers (TC) have increasingly been detected following advances in diagnostic methods. Risk stratification guided by refined information is a crucial step toward the goal of personalized medicine. The diagnosis of TC relies mainly on imaging analysis, but visual examination alone may not reveal latent information or enable comprehensive analysis. Artificial intelligence (AI) is a technology used to extract and quantify key image information by simulating complex human functions. This latent, precise information helps stratify TC by distinct risk levels and drives tailored management, shifting care from the population-based to the individual-based level. In this review, we begin with several challenges in personalized care for TC, for example, the inconsistent rating ability of ultrasound physicians, uncertainty in cytopathological diagnosis, difficulty in discriminating follicular neoplasms, and inaccurate prognostication. We then analyze and summarize the advances in AI for extracting and analyzing morphological, textural, and molecular features to reveal the ground truth of TC. Consequently, the combination of these features with AI technology may make individualized medical strategies possible.
Affiliation(s)
- Ling-Rui Li
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
- Bo Du
- School of Computer Science, Wuhan University, Wuhan, China; Institute of Artificial Intelligence, Wuhan University, Wuhan, China
- Han-Qing Liu
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
- Chuang Chen
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
82
Huang Z, Zhou Q, Zhu X, Zhang X. Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification. SENSORS 2021; 21:s21030764. [PMID: 33498800 PMCID: PMC7865867 DOI: 10.3390/s21030764] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 01/19/2021] [Accepted: 01/21/2021] [Indexed: 01/16/2023]
Abstract
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. Light-weighted CNNs are easy to train, but they usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we propose a novel batch similarity-based triplet loss to guide the CNNs in learning their weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of the training data. Reducing the proposed loss increases the similarity among images of the same category and reduces the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To assess the performance of the proposed loss, experiments were conducted on chest X-ray images and skin rash images, comparing it against several losses on popular light-weighted CNN models such as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
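The core idea of a similarity-based loss over a batch can be sketched in a few lines: pull same-class pairs toward high similarity and push different-class pairs toward low similarity. The toy hinge-style formulation below is for illustration only; the paper's exact loss, margin, and feature extractor are not reproduced, and all names are mine.

```python
# Toy sketch of a batch-similarity loss over cosine similarities:
# reward same-class pairs for being similar, penalize different-class pairs.
import math

def cosine(u, v):
    """Cosine similarity of two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def batch_similarity_loss(features, labels, margin=0.5):
    """Mean over all pairs of hinge terms: (margin - sim)+ for same-class
    pairs and (sim)+ for different-class pairs."""
    loss, pairs = 0.0, 0
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            sim = cosine(features[i], features[j])
            if labels[i] == labels[j]:
                loss += max(0.0, margin - sim)   # want same-class pairs similar
            else:
                loss += max(0.0, sim)            # want diff-class pairs dissimilar
            pairs += 1
    return loss / pairs

feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
labels = [0, 0, 1]
print(round(batch_similarity_loss(feats, labels), 4))
```

In a real training loop this scalar would be added to the usual cross-entropy term and minimized by gradient descent over the CNN's weights.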
83
A Hybrid Swarm and Gravitation-based feature selection algorithm for handwritten Indic script classification problem. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-020-00237-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
In any multi-script environment, handwritten script classification is an unavoidable pre-requisite before the document images are fed to their respective Optical Character Recognition (OCR) engines. Over the years, this complex pattern classification problem has been solved by researchers proposing various feature vectors mostly having large dimensions, thereby increasing the computation complexity of the whole classification model. Feature Selection (FS) can serve as an intermediate step to reduce the size of the feature vectors by restricting them only to the essential and relevant features. In the present work, we have addressed this issue by introducing a new FS algorithm, called Hybrid Swarm and Gravitation-based FS (HSGFS). This algorithm has been applied over three feature vectors introduced in the literature recently—Distance-Hough Transform (DHT), Histogram of Oriented Gradients (HOG), and Modified log-Gabor (MLG) filter Transform. Three state-of-the-art classifiers, namely, Multi-Layer Perceptron (MLP), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), are used to evaluate the optimal subset of features generated by the proposed FS model. Handwritten datasets at block, text line, and word level, consisting of officially recognized 12 Indic scripts, are prepared for experimentation. An average improvement in the range of 2–5% is achieved in the classification accuracy by utilizing only about 75–80% of the original feature vectors on all three datasets. The proposed method also shows better performance when compared to some popularly used FS models. The codes used for implementing HSGFS can be found in the following Github link: https://github.com/Ritam-Guha/HSGFS.
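The evaluation scaffold shared by wrapper-style FS methods such as HSGFS can be sketched as a loop that scores candidate feature subsets and keeps the best one. In the sketch below, random sampling stands in for the swarm-and-gravitation search, and a toy relevance-minus-size fitness stands in for classifier accuracy; both substitutions are mine.

```python
# Minimal wrapper-style feature-selection loop: score candidate subsets,
# keep the best. HSGFS drives the search with swarm/gravitation dynamics;
# here plain random sampling is used purely to show the scaffold.
import random

def fitness(subset, relevance):
    """Toy objective: total relevance of chosen features, penalized by
    subset size (stands in for classifier accuracy on the subset)."""
    if not subset:
        return 0.0
    return sum(relevance[i] for i in subset) - 0.05 * len(subset)

def random_search_fs(n_features, relevance, iters=200, seed=7):
    rng = random.Random(seed)
    best, best_fit = frozenset(), 0.0
    for _ in range(iters):
        candidate = frozenset(i for i in range(n_features) if rng.random() < 0.5)
        f = fitness(candidate, relevance)
        if f > best_fit:
            best, best_fit = candidate, f
    return sorted(best), best_fit

relevance = [0.9, 0.1, 0.8, 0.02, 0.7]   # features 0, 2, 4 are informative
chosen, score = random_search_fs(5, relevance)
print(chosen, round(score, 2))
```

Replacing the random candidate generator with a population update rule (velocities, masses, attraction between agents) turns this scaffold into a swarm- or gravitation-based search.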
84
Dey P. The emerging role of deep learning in cytology. Cytopathology 2020; 32:154-160. [PMID: 33222315 DOI: 10.1111/cyt.12942] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 11/11/2020] [Accepted: 11/16/2020] [Indexed: 12/14/2022]
Abstract
Deep learning (DL) is a component, or subset, of artificial intelligence. DL has brought significant change to feature extraction and image classification. Various algorithmic models are used in DL, such as the convolutional neural network (CNN), recurrent neural network, restricted Boltzmann machine, deep belief network and autoencoder. Of these, the CNN is the most commonly used algorithm in the field of pathology for feature extraction and building neural network models. DL may be useful for tumour diagnosis, classification and grading in cytology. In this brief review, the basic concepts of DL and the CNN are described. The applications, prospects and challenges of DL in cytology are also discussed.
Affiliation(s)
- Pranab Dey
- Department of Cytology and Gynec Pathology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
85
Kezlarian B, Lin O. Artificial Intelligence in Thyroid Fine Needle Aspiration Biopsies. Acta Cytol 2020; 65:324-329. [PMID: 33326953 DOI: 10.1159/000512097] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 10/06/2020] [Indexed: 11/19/2022]
Abstract
BACKGROUND From cell phones to aerospace, artificial intelligence (AI) has wide-reaching influence in the modern age. In this review, we discuss the application of AI solutions to an equally ubiquitous problem in cytopathology: thyroid fine needle aspiration biopsy (FNAB). Thyroid nodules are common in the general population, and FNAB is the sampling modality of choice. The resulting prevalence in the practicing pathologist's daily workload makes thyroid FNAB an appealing target for the application of AI solutions. SUMMARY This review summarizes all available literature on the application of AI to thyroid cytopathology. We follow the evolution from morphometric analysis to convolutional neural networks. We explore the application of AI technology to different questions in thyroid cytopathology, including distinguishing papillary carcinoma from benign lesions, distinguishing follicular adenoma from carcinoma, and identifying non-invasive follicular thyroid neoplasm with papillary-like nuclear features. KEY MESSAGES The current literature shows promise for the application of AI technology to thyroid fine needle aspiration biopsy. Much work is needed to define how this powerful technology will best serve the future of cytopathology practice.
Affiliation(s)
- Brie Kezlarian
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Oscar Lin
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
86
Azghadi MR, Lammie C, Eshraghian JK, Payvand M, Donati E, Linares-Barranco B, Indiveri G. Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2020; 14:1138-1159. [PMID: 33156792 DOI: 10.1109/tbcas.2020.3036081] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors has brought on new opportunities for applying both Deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge. This can facilitate the advancement of medical Internet of Things (IoT) systems and Point of Care (PoC) devices. In this paper, we provide a tutorial describing how various technologies including emerging memristive devices, Field Programmable Gate Arrays (FPGAs), and Complementary Metal Oxide Semiconductor (CMOS) can be used to develop efficient DL accelerators to solve a wide variety of diagnostic, pattern recognition, and signal processing problems in healthcare. Furthermore, we explore how spiking neuromorphic processors can complement their DL counterparts for processing biomedical signals. The tutorial is augmented with case studies of the vast literature on neural network and neuromorphic hardware as applied to the healthcare domain. We benchmark various hardware platforms by performing a sensor fusion signal processing task combining electromyography (EMG) signals with computer vision. Comparisons are made between dedicated neuromorphic processors and embedded AI accelerators in terms of inference latency and energy. Finally, we provide our analysis of the field and share a perspective on the advantages, disadvantages, challenges, and opportunities that various accelerators and neuromorphic processors introduce to healthcare and biomedical domains.
87
Do LN, Baek BH, Kim SK, Yang HJ, Park I, Yoon W. Automatic Assessment of ASPECTS Using Diffusion-Weighted Imaging in Acute Ischemic Stroke Using Recurrent Residual Convolutional Neural Network. Diagnostics (Basel) 2020; 10:diagnostics10100803. [PMID: 33050251 PMCID: PMC7601116 DOI: 10.3390/diagnostics10100803] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 09/29/2020] [Accepted: 10/07/2020] [Indexed: 01/01/2023] Open
Abstract
The early detection and rapid quantification of acute ischemic lesions play pivotal roles in stroke management. We developed a deep learning algorithm for the automatic binary classification of the Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) using diffusion-weighted imaging (DWI) in acute stroke patients. Three hundred and ninety DWI datasets with acute anterior circulation stroke were included. A classifier algorithm utilizing a recurrent residual convolutional neural network (RRCNN) was developed for classification between low (1–6) and high (7–10) DWI-ASPECTS groups. The model performance was compared with a pre-trained VGG16, Inception V3, and a 3D convolutional neural network (3DCNN). The proposed RRCNN model demonstrated higher performance than the pre-trained models and 3DCNN with an accuracy of 87.3%, AUC of 0.941, and F1-score of 0.888 for classification between the low and high DWI-ASPECTS groups. These results suggest that the deep learning algorithm developed in this study can provide a rapid assessment of DWI-ASPECTS and may serve as an ancillary tool that can assist physicians in making urgent clinical decisions.
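The accuracy and F1-score reported above follow directly from a 2 × 2 confusion matrix over the low (1–6) versus high (7–10) DWI-ASPECTS groups. The counts in this sketch are invented to demonstrate the formulas, not the study's data.

```python
# Standard binary-classification metrics from a 2x2 confusion matrix.
# The counts below are illustrative, not the study's confusion matrix.

def binary_metrics(tp, fp, fn, tn):
    """Accuracy and F1-score, with F1 the harmonic mean of precision and recall."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

acc, f1 = binary_metrics(tp=50, fp=5, fn=8, tn=37)
print(f"accuracy={acc:.3f} F1={f1:.3f}")
```

Because F1 weights precision and recall equally, it can differ noticeably from accuracy whenever the two error types (fp, fn) are imbalanced, as in this example.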
Affiliation(s)
- Luu-Ngoc Do
- Department of Radiology, Chonnam National University, Gwangju 61469, Korea; (L.-N.D.); (B.H.B.); (S.K.K.); (W.Y.)
- Byung Hyun Baek
- Department of Radiology, Chonnam National University, Gwangju 61469, Korea; (L.-N.D.); (B.H.B.); (S.K.K.); (W.Y.)
- Department of Radiology, Chonnam National University Hospital, Gwangju 61469, Korea
- Seul Kee Kim
- Department of Radiology, Chonnam National University, Gwangju 61469, Korea; (L.-N.D.); (B.H.B.); (S.K.K.); (W.Y.)
- Department of Radiology, Chonnam National University Hwasun Hospital, Hwasun 58128, Korea
- Hyung-Jeong Yang
- Department of Electronics and Computer Engineering, Chonnam National University, Gwangju 61186, Korea
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Korea
- Correspondence: (I.P.); (H.-J.Y.); Tel.: +82-62-220-5744 (I.P.); +82-62-530-3436 (H.-J.Y.)
- Ilwoo Park
- Department of Radiology, Chonnam National University, Gwangju 61469, Korea; (L.-N.D.); (B.H.B.); (S.K.K.); (W.Y.)
- Department of Radiology, Chonnam National University Hospital, Gwangju 61469, Korea
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Korea
- Correspondence: (I.P.); (H.-J.Y.); Tel.: +82-62-220-5744 (I.P.); +82-62-530-3436 (H.-J.Y.)
- Woong Yoon
- Department of Radiology, Chonnam National University, Gwangju 61469, Korea; (L.-N.D.); (B.H.B.); (S.K.K.); (W.Y.)
- Department of Radiology, Chonnam National University Hospital, Gwangju 61469, Korea
88
Thomas J, Ledger GA, Mamillapalli CK. Use of artificial intelligence and machine learning for estimating malignancy risk of thyroid nodules. Curr Opin Endocrinol Diabetes Obes 2020; 27:345-350. [PMID: 32740044 DOI: 10.1097/med.0000000000000557] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE OF REVIEW Current methods for thyroid nodule risk stratification are subjective, and artificial intelligence algorithms have been used to overcome this shortcoming. In this review, we summarize recent developments in the application of artificial intelligence algorithms for estimating the risk of malignancy in a thyroid nodule. RECENT FINDINGS Artificial intelligence has been used to predict malignancy in thyroid nodules using ultrasound images, cytopathology images, and molecular markers. Recent clinical trials have shown that artificial intelligence models' performance matched that of experienced radiologists and pathologists. Explainable artificial intelligence models are being developed to avoid the black box problem. Risk stratification algorithms using artificial intelligence for thyroid nodules are now commercially available in many countries. SUMMARY Artificial intelligence models could become a useful decision support tool in a thyroidologist's armamentarium. Increased adoption of this emerging technology will depend upon increased awareness of its potential benefits and pitfalls.
Affiliation(s)
- Johnson Thomas
- Department of Endocrinology, Mercy Hospital, Springfield, Missouri
- Gregory A Ledger
- Department of Endocrinology, Mercy Hospital, Springfield, Missouri
89
Steinbuss G, Kriegsmann K, Kriegsmann M. Identification of Gastritis Subtypes by Convolutional Neuronal Networks on Histological Images of Antrum and Corpus Biopsies. Int J Mol Sci 2020; 21:ijms21186652. [PMID: 32932860 PMCID: PMC7555568 DOI: 10.3390/ijms21186652] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 09/07/2020] [Accepted: 09/08/2020] [Indexed: 12/02/2022] Open
Abstract
Background: Gastritis is a prevalent disease and commonly classified into autoimmune (A), bacterial (B), and chemical (C) type gastritis. While the former two subtypes are associated with an increased risk of developing gastric intestinal adenocarcinoma, the latter subtype is not. In this study, we evaluated the capability to classify common gastritis subtypes using convolutional neuronal networks on a small dataset of antrum and corpus biopsies. Methods: 1230 representative 500 × 500 µm images of 135 patients with type A, type B, and type C gastritis were extracted from scanned histological slides. Patients were allocated randomly into a training set (60%), a validation set (20%), and a test set (20%). One classifier for antrum and one classifier for corpus were trained and optimized. After optimization, the test set was analyzed using a joint result from both classifiers. Results: Overall accuracy in the test set was 84% and was particularly high for type B gastritis with a sensitivity of 100% and a specificity of 93%. Conclusions: Classification of gastritis subtypes is possible using convolutional neural networks on a small dataset of histopathological images of antrum and corpus biopsies. Deep learning strategies to support routine diagnostic pathology merit further evaluation.
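The "joint result from both classifiers" can be sketched as a simple fusion of per-site class probabilities. The paper's exact fusion rule is not given in this abstract, so straightforward averaging is shown; the function name and example probabilities are mine.

```python
# Sketch of fusing antrum and corpus classifier outputs into one joint call,
# assuming each classifier emits probabilities for gastritis types A, B, C.
# Averaging is an assumed fusion rule, not necessarily the study's.

def joint_prediction(antrum_probs, corpus_probs):
    classes = ["A", "B", "C"]
    avg = [(a + c) / 2 for a, c in zip(antrum_probs, corpus_probs)]
    return classes[avg.index(max(avg))], avg

# Antrum biopsy leans type B; corpus biopsy is ambiguous between A and B.
label, avg = joint_prediction([0.10, 0.80, 0.10], [0.45, 0.50, 0.05])
print(label, [round(p, 3) for p in avg])   # joint call: "B"
```

Fusing the two sites lets a confident call from one biopsy outweigh an ambiguous one from the other, which is the motivation for training separate antrum and corpus classifiers.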
Affiliation(s)
- Georg Steinbuss
- Department of Hematology, Oncology and Rheumatology, University Hospital Heidelberg, 69120 Heidelberg, Germany; (G.S.); (K.K.)
- Institute of Pathology, University Hospital Heidelberg, 69120 Heidelberg, Germany
- Katharina Kriegsmann
- Department of Hematology, Oncology and Rheumatology, University Hospital Heidelberg, 69120 Heidelberg, Germany; (G.S.); (K.K.)
- Mark Kriegsmann
- Institute of Pathology, University Hospital Heidelberg, 69120 Heidelberg, Germany
- Correspondence: ; Tel.: +49-6221-56-36930
90
Chen JR, Chao YP, Tsai YW, Chan HJ, Wan YL, Tai DI, Tsui PH. Clinical Value of Information Entropy Compared with Deep Learning for Ultrasound Grading of Hepatic Steatosis. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E1006. [PMID: 33286775 PMCID: PMC7597079 DOI: 10.3390/e22091006] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Revised: 08/31/2020] [Accepted: 09/07/2020] [Indexed: 02/07/2023]
Abstract
Entropy is a quantitative measure of signal uncertainty and has been widely applied to ultrasound tissue characterization. Ultrasound assessment of hepatic steatosis typically involves a backscattered statistical analysis of signals based on information entropy. Deep learning extracts features for classification without any physical assumptions or considerations in acoustics. In this study, we assessed clinical values of information entropy and deep learning in the grading of hepatic steatosis. A total of 205 participants underwent ultrasound examinations. The image raw data were used for Shannon entropy imaging and for training and testing by the pretrained VGG-16 model, which has been employed for medical data analysis. The entropy imaging and VGG-16 model predictions were compared with histological examinations. The diagnostic performances in grading hepatic steatosis were evaluated using receiver operating characteristic (ROC) curve analysis and the DeLong test. The areas under the ROC curves when using the VGG-16 model to grade mild, moderate, and severe hepatic steatosis were 0.71, 0.75, and 0.88, respectively; those for entropy imaging were 0.68, 0.85, and 0.9, respectively. Ultrasound entropy, which varies with fatty infiltration in the liver, outperformed VGG-16 in identifying participants with moderate or severe hepatic steatosis (p < 0.05). The results indicated that physics-based information entropy for backscattering statistics analysis can be recommended for ultrasound diagnosis of hepatic steatosis, providing not only improved performance in grading but also clinical interpretations of hepatic steatosis.
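The Shannon entropy behind the entropy imaging described above is computed from a histogram of signal amplitudes. The sketch below is a minimal version; the bin count and the use of whole signals rather than sliding windows are my simplifications, not the study's settings.

```python
# Shannon entropy of an amplitude histogram: H = -sum p * log2(p) over bins.
# Spread-out (information-rich) signals score high; near-constant signals low.
import math
from collections import Counter

def shannon_entropy(samples, n_bins=32):
    """Histogram the samples into n_bins equal-width bins, then sum
    -p * log2(p) over the non-empty bins."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant signal
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

uniform = [i / 999 for i in range(1000)]   # spread-out amplitudes
peaked = [0.5] * 990 + [0.0, 1.0] * 5      # nearly constant signal
print(round(shannon_entropy(uniform), 3), round(shannon_entropy(peaked), 3))
```

In entropy imaging, this scalar is computed over a sliding window of the backscattered envelope, producing a parametric map whose values change with fatty infiltration.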
Affiliation(s)
- Jheng-Ru Chen
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; (J.-R.C.); (Y.-W.T.); (H.-J.C.); (Y.-L.W.)
- Yi-Ping Chao
- Department of Computer Science and Information Engineering, College of Engineering, Chang Gung University, Taoyuan 333323, Taiwan
- Graduate Institute of Biomedical Engineering, College of Engineering, Chang Gung University, Taoyuan 333323, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Yu-Wei Tsai
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; (J.-R.C.); (Y.-W.T.); (H.-J.C.); (Y.-L.W.)
- Hsien-Jung Chan
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; (J.-R.C.); (Y.-W.T.); (H.-J.C.); (Y.-L.W.)
- Yung-Liang Wan
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; (J.-R.C.); (Y.-W.T.); (H.-J.C.); (Y.-L.W.)
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Dar-In Tai
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital at Linkou, Chang Gung University, Taoyuan 333423, Taiwan
- Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; (J.-R.C.); (Y.-W.T.); (H.-J.C.); (Y.-L.W.)
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
91
Qin YY, Huang SN, Chen G, Pang YY, Li XJ, Xing WW, Wei DM, He Y, Rong MH, Tang XZ. Clinicopathological value and underlying molecular mechanism of annexin A2 in 992 cases of thyroid carcinoma. Comput Biol Chem 2020; 86:107258. [PMID: 32304977 DOI: 10.1016/j.compbiolchem.2020.107258] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 12/30/2019] [Accepted: 03/23/2020] [Indexed: 12/15/2022]
Abstract
BACKGROUND Thyroid carcinoma (THCA) is one of the most frequent endocrine cancers and has increasing morbidity. Annexin A2 (ANXA2) has been found to be highly expressed in various cancers; however, its expression level and potential mechanism in THCA remain unknown. This study investigated the clinicopathological value and primary molecular machinery of ANXA2 in THCA. MATERIAL AND METHODS Public RNA-sequencing and microarray data were obtained and analyzed for ANXA2 expression in THCA and corresponding non-cancerous thyroid tissue. Pearson correlation coefficients were calculated to acquire ANXA2 co-expressed genes, while edgeR, limma, and Robust Rank Aggregation were employed to identify differentially expressed genes (DEGs) in THCA. The probable mechanism of ANXA2 in THCA was predicted by gene ontology and pathway enrichment analyses. A dual-luciferase reporter assay was employed to confirm the targeting relationship between ANXA2 and its predicted microRNA (miRNA). RESULTS Expression of ANXA2 was significantly upregulated in THCA tissues, with a summarized standardized mean difference of 1.09 (P < 0.0001) based on 992 THCA cases and 589 cases of normal thyroid tissue. Expression of ANXA2 was related to pathologic stage. Subsequently, 1442 genes were obtained by overlapping 4542 ANXA2 co-expressed genes with 2248 DEGs in THCA; these genes were mostly enriched in the pathways of extracellular matrix-receptor interaction, cell adhesion molecules, and complement and coagulation cascades. MiR-23b-3p was confirmed to target ANXA2 by the dual-luciferase reporter assay. CONCLUSIONS Upregulated expression of ANXA2 may promote the malignant biological behavior of THCA by affecting the pathways involved or by being targeted by miR-23b-3p.
Affiliation(s)
- Yong-Ying Qin
- Department of Head and Neck Tumor Surgery, Guangxi Medical University Cancer Hospital, 71 Hedi Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Su-Ning Huang
- Department of Radiotherapy, Guangxi Medical University Cancer Hospital, 71 Hedi Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Gang Chen
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Yu-Yan Pang
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Xiao-Jiao Li
- Department of PET/CT, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Wen-Wen Xing
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Dan-Ming Wei
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Yun He
- Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, 6 Shuangyong Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Min-Hua Rong
- Department of Research, Guangxi Medical University Cancer Hospital, 71 Hedi Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
- Xiao-Zhun Tang
- Department of Head and Neck Tumor Surgery, Guangxi Medical University Cancer Hospital, 71 Hedi Road, Nanning, Guangxi Zhuang Autonomous Region, PR China
92
Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond) 2020; 40:154-166. [PMID: 32277744 PMCID: PMC7170661 DOI: 10.1002/cac2.12012] [Citation(s) in RCA: 221] [Impact Index Per Article: 44.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 02/06/2020] [Indexed: 12/11/2022] Open
Abstract
The development of digital pathology and the progression of state-of-the-art computer vision algorithms have led to increasing interest in the use of artificial intelligence (AI), especially deep learning (DL)-based AI, in tumor pathology. DL-based algorithms have been developed to conduct all kinds of work involved in tumor pathology, including tumor diagnosis, subtyping, grading, staging, and prognostic prediction, as well as the identification of pathological features, biomarkers and genetic changes. The applications of AI in pathology not only help improve diagnostic accuracy and objectivity but also reduce the workload of pathologists, enabling them to spend more time on high-level decision-making tasks. In addition, AI is useful for pathologists in meeting the requirements of precision oncology. However, there are still challenges to the implementation of AI, including issues of algorithm validation and interpretability, computing systems, skepticism among pathologists, clinicians and patients, as well as regulatory and reimbursement concerns. Herein, we present an overview of how AI-based approaches could be integrated into the workflow of pathologists and discuss the challenges and perspectives of the implementation of AI in tumor pathology.
Affiliation(s)
- Yahui Jiang
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Meng Yang
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Shuhao Wang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, P. R. China
- Xiangchun Li
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Yan Sun
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China