1
Ho QH, Nguyen TNQ, Tran TT, Pham VT. LiteMamba-Bound: A lightweight Mamba-based model with boundary-aware and normalized active contour loss for skin lesion segmentation. Methods 2025; 235:10-25. PMID: 39864606. DOI: 10.1016/j.ymeth.2025.01.008.
Abstract
In the field of medical science, skin segmentation has gained significant importance, particularly in dermatology and skin cancer research. This domain demands high precision in distinguishing critical regions (such as lesions or moles) from healthy skin in medical images. With ongoing technological advances, deep learning models have become indispensable tools for addressing these challenges. One state-of-the-art module introduced in recent years, the 2D Selective Scan (SS2D), is based on state-space models that have already seen great success in natural language processing; it has been increasingly adopted and is gradually replacing Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Leveraging the strength of this module, this paper introduces LiteMamba-Bound, a lightweight model with approximately 957K parameters designed for skin image segmentation tasks. Notably, the Channel Attention Dual Mamba (CAD-Mamba) block is proposed within both the encoder and decoder, alongside the Mix Convolution with Simple Attention bottleneck block, to emphasize key features. Additionally, we propose the Reverse Attention Boundary Module to highlight challenging boundary features. The Normalized Active Contour loss function presented in this paper also significantly improves the model's performance compared with other loss functions. To validate performance, we conducted tests on two skin image datasets, ISIC2018 and PH2, with results consistently showing superior performance compared with other models. Our code will be made publicly available at: https://github.com/kwanghwi242/A-new-segmentation-model.
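The abstract names a Normalized Active Contour loss but does not spell it out. As a rough illustration only, here is a minimal NumPy sketch of a classic active-contour-style segmentation loss: a length term plus Chan-Vese-like region terms, normalized by pixel count. The function name and the exact form of the terms are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def normalized_active_contour_loss(pred, target, eps=1e-8):
    """Active-contour-style segmentation loss (illustrative sketch).

    pred and target are 2D arrays in [0, 1]: predicted foreground
    probabilities and the ground-truth mask. A length term penalizes
    the boundary length of the prediction, and Chan-Vese-like region
    terms penalize in/out disagreement with the mask; dividing by the
    pixel count normalizes the loss across image resolutions.
    """
    # Length term: total magnitude of the spatial gradient of pred.
    dy = pred[1:, :] - pred[:-1, :]
    dx = pred[:, 1:] - pred[:, :-1]
    length = np.sum(np.sqrt(dy[:, :-1] ** 2 + dx[:-1, :] ** 2 + eps))

    # Region terms: pred should be high where target == 1, low where target == 0.
    region_in = np.sum(pred * (target - 1.0) ** 2)
    region_out = np.sum((1.0 - pred) * target ** 2)

    return (length + region_in + region_out) / pred.size
```

A perfect prediction incurs only the (small) boundary-length penalty, while an inverted prediction is heavily penalized by the region terms.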
Affiliation(s)
- Quang-Huy Ho
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Viet Nam
- Thi-Nhu-Quynh Nguyen
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Viet Nam
- Thi-Thao Tran
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Viet Nam
- Van-Truong Pham
- School of Electrical and Electronic Engineering, Hanoi University of Science and Technology, Hanoi, Viet Nam.
2
Byun YH, Son J, Yun J, Choo H, Won J. Machine learning-based pattern recognition of Bender element signals for predicting sand particle-size. Sci Rep 2025; 15:6949. PMID: 40011671. DOI: 10.1038/s41598-025-91497-9.
Abstract
This study explores the potential of integrating bender element signals with a convolutional neural network (CNN) to predict the particle size distribution of relatively uniform sand. A one-dimensional CNN analyzed time-series signals from bender elements across four sand types with particle sizes ranging from 0.5 to approximately 7 mm, under vertical stresses of 10, 50, and 150 kPa and at three different cutoff frequencies (10, 50, and 100 kHz). The CNN architecture included convolutional layers augmented with batch normalization and ReLU activation functions, optimized through Bayesian techniques to enhance prediction accuracy. Experimental results demonstrated that higher stresses increased resonant frequencies and reduced arrival times of shear waves, with minor dependencies on soil type. Nevertheless, the developed CNN model classified the four sand types well at a given vertical stress and cutoff frequency, implying that the unique pattern of each sand type can be satisfactorily captured by the CNN algorithm. Overall, the framework shown in this study demonstrates that the bender element (or the pattern of received shear-wave signals) combined with the CNN model can be used to monitor real-time variation of sand particle size.
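As a toy illustration of the kind of feature extraction a one-dimensional CNN performs on bender element traces, the sketch below implements a valid-mode 1D convolution followed by ReLU in NumPy. The synthetic trace and the three-tap kernel are invented for illustration and are not the study's architecture or data.

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation), the core op of a Conv1D layer."""
    k = len(kernel)
    out_len = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

def relu(x):
    """ReLU activation, as used after each convolutional layer."""
    return np.maximum(x, 0.0)

# Toy bender-element-like trace: a decaying sinusoidal pulse. The
# three-tap difference kernel responds to local slope, so the feature
# map is largest where the wave packet has the steepest oscillations.
t = np.linspace(0.0, 1.0, 200)
trace = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
feature_map = relu(conv1d(trace, np.array([1.0, 0.0, -1.0])))
```

A real Conv1D layer learns many such kernels jointly, but each one reduces to this multiply-accumulate sweep over the signal.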
Affiliation(s)
- Yong-Hoon Byun
- Department of Agricultural Civil Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu, 41566, Republic of Korea
- Juik Son
- Department of Agricultural Civil Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu, 41566, Republic of Korea
- Jungmin Yun
- Department of Civil and Environmental Engineering, University of Ulsan, Daehak-ro 93, Nam-gu, Ulsan, 680-749, Republic of Korea
- Hyunwook Choo
- Department of Civil and Environmental Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Jongmuk Won
- Department of Civil, Earth, and Environmental Engineering, Ulsan National Institute of Science and Technology (UNIST), UNIST-gil 50, Ulju-gun, Ulsan, 44919, Republic of Korea.
3
Kanchan M, Tambe PK, Bharati S, Powar OS. Convolutional neural network for colorimetric glucose detection using a smartphone and novel multilayer polyvinyl film microfluidic device. Sci Rep 2024; 14:28377. PMID: 39551869. PMCID: PMC11570695. DOI: 10.1038/s41598-024-79581-y.
Abstract
Detecting glucose levels is crucial for diabetes patients as it enables timely and effective management, preventing complications and promoting overall health. In this endeavor, we have designed a novel, affordable point-of-care diagnostic device utilizing microfluidic principles, a smartphone camera, and established laboratory colorimetric methods for accurate glucose estimation. Our proposed microfluidic device comprises layers of adhesive polyvinyl films stacked on a poly methyl methacrylate (PMMA) base sheet, with micro-channel contours precision-cut using a cutting printer. Employing the gold-standard glucose-oxidase/peroxidase reaction on this microfluidic platform, we achieve enzymatic glucose determination. The resulting colored complex, formed by phenol and 4-aminoantipyrine in the presence of hydrogen peroxide generated during glucose oxidation, is captured at various glucose concentrations using a smartphone camera. Raw images are processed and used as input data for a 2-D convolutional neural network (CNN) deep learning classifier, demonstrating an impressive 95% overall accuracy on new images. The glucose predictions made by the CNN are compared against the ISO 15197:2013/2015 gold-standard norms. Furthermore, the classifier exhibits outstanding precision, recall, and F1 score of 94%, 93%, and 93%, respectively, as validated through our study, showcasing its exceptional predictive capability. A user-friendly smartphone application named "GLUCOLENS AI" was then developed to capture images, perform image processing, and communicate with a cloud server containing the CNN classifier. The developed CNN can be successfully used as a pre-trained model for future glucose concentration predictions.
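The paper's pipeline feeds raw images to a 2-D CNN, but the underlying colorimetric idea, mapping the color of the reaction complex in a region of interest to a glucose level, can be sketched simply. The NumPy illustration below is hypothetical: the ROI extraction is generic, and the linear calibration (its slope and intercept are invented placeholders) stands in for a curve that would be fitted against known glucose standards, not the paper's trained model.

```python
import numpy as np

def mean_color_intensity(image, roi):
    """Mean per-channel intensity inside a rectangular ROI of an HxWx3 image."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1, :].astype(float)
    return patch.reshape(-1, 3).mean(axis=0)

def predict_glucose(green_mean, slope=-2.0, intercept=400.0):
    """Hypothetical linear calibration: a deeper red complex (lower green-
    channel intensity) corresponds to higher glucose. slope and intercept
    are illustrative placeholders, not fitted coefficients."""
    return slope * green_mean + intercept
```

A CNN on raw images replaces this hand-crafted feature with learned ones, which is what makes the classifier robust to lighting and positioning variation.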
Affiliation(s)
- Mithun Kanchan
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Prasad Kisan Tambe
- Department of Nuclear Medicine, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Sanjay Bharati
- Department of Nuclear Medicine, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Omkar S Powar
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India.
4
Fatriansyah JF, Linuwih BDP, Andreano Y, Sari IS, Federico A, Anis M, Surip SN, Jaafar M. Prediction of Glass Transition Temperature of Polymers Using Simple Machine Learning. Polymers (Basel) 2024; 16:2464. PMID: 39274097. PMCID: PMC11398084. DOI: 10.3390/polym16172464.
Abstract
Polymer materials have garnered significant attention due to their exceptional mechanical properties and diverse industrial applications. Understanding the glass transition temperature (Tg) of polymers is critical to prevent operational failures at specific temperatures. Traditional methods for measuring Tg, such as differential scanning calorimetry (DSC) and dynamic mechanical analysis, while accurate, are often time-consuming, costly, and susceptible to inaccuracies due to random and uncertain factors. To address these limitations, the present study investigates the potential of Simplified Molecular Input Line Entry System (SMILES) strings as descriptors in simple machine learning models to predict Tg efficiently and reliably. Five models were utilized: k-nearest neighbors (KNN), support vector regression (SVR), extreme gradient boosting (XGBoost), an artificial neural network (ANN), and a recurrent neural network (RNN). SMILES descriptors were converted into numerical data using either One Hot Encoding (OHE) or Natural Language Processing (NLP). The study found that SMILES inputs with fewer than 200 characters were inadequate for accurately describing compound structures, while inputs exceeding 200 characters diminished model performance due to the curse of dimensionality. The ANN model achieved the highest R2 value of 0.79; however, the XGBoost model, with an R2 value of 0.774, exhibited the highest stability and shorter training times than the other models, making it the preferred choice for Tg prediction. The efficiency of the OHE method over NLP was demonstrated by faster training times across the KNN, SVR, XGBoost, and ANN models. Validation on new polymer data showed the XGBoost model's robustness, with an average prediction deviation of 9.76 from actual Tg values. These findings underscore the importance of optimizing SMILES conversion methods and model parameters to enhance prediction reliability. Future research should focus on improving model accuracy and generalizability by incorporating additional features and advanced techniques. This study contributes to the development of efficient and reliable predictive models for polymer properties, facilitating the design and application of new polymer materials.
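The One Hot Encoding of SMILES descriptors described above can be sketched in a few lines of NumPy. The tiny alphabet and maximum length here are illustrative choices, not the study's settings.

```python
import numpy as np

def one_hot_smiles(smiles, alphabet, max_len):
    """One Hot Encoding of a SMILES string into a (max_len, |alphabet|) matrix.

    Each character selects one column in its row; padding rows (and any
    character outside the alphabet) stay all-zero. Flattened, this gives
    the fixed-size numeric vector that models such as KNN, SVR, XGBoost,
    or an ANN can consume.
    """
    idx = {ch: i for i, ch in enumerate(alphabet)}
    mat = np.zeros((max_len, len(alphabet)))
    for pos, ch in enumerate(smiles[:max_len]):
        if ch in idx:
            mat[pos, idx[ch]] = 1.0
    return mat

# Acetic acid, encoded against a small illustrative alphabet.
enc = one_hot_smiles("CC(=O)O", alphabet="CO()=N", max_len=10)
```

Because every string is padded or truncated to `max_len`, the input dimension is fixed, which is also why very long SMILES inflate the feature space (the curse of dimensionality the abstract mentions).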
Affiliation(s)
- Jaka Fajar Fatriansyah
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Advanced Functional Material Research Group, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Baiq Diffa Pakarti Linuwih
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Yossi Andreano
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Intan Septia Sari
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Andreas Federico
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Muhammad Anis
- Department of Metallurgical and Materials Engineering, Faculty of Engineering, Universitas Indonesia, Kampus UI Depok, Depok 16424, Indonesia
- Siti Norasmah Surip
- Faculty of Applied Sciences, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
- Mariatti Jaafar
- School of Materials and Mineral Resources Engineering, Universiti Sains Malaysia (USM), Nibong Tebal 14300, Malaysia
5
Yang J, Cai Y, Wang F, Li S, Zhan X, Xu K, He J, Wang Z. A Reconfigurable Bipolar Image Sensor for High-Efficiency Dynamic Vision Recognition. Nano Lett 2024; 24:5862-5869. PMID: 38709809. DOI: 10.1021/acs.nanolett.4c01190.
Abstract
Dynamic vision perception and processing (DVPP) is in high demand by booming edge artificial intelligence. However, existing imaging systems suffer from low efficiency or low compatibility with advanced machine vision techniques. Here, we propose a reconfigurable bipolar image sensor (RBIS) for in-sensor DVPP based on a two-dimensional WSe2/GeSe heterostructure device. Owing to the gate-tunable and reversible built-in electric field, its photoresponse shows bipolarity, i.e., it can be positive or negative. High-efficiency DVPP incorporating a front-end RBIS and a back-end CNN is then demonstrated. It shows a high recognition accuracy of over 94.9% on the derived DVS128 data set and requires far fewer neural network parameters than a network without RBIS. Moreover, we demonstrate an optimized device with a vertically stacked structure and stable nonvolatile bipolarity, which enables more efficient DVPP hardware. Our work demonstrates the potential of fabricating DVPP devices with a simple structure, high efficiency, and outputs compatible with advanced algorithms.
Affiliation(s)
- Jia Yang
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Yuchen Cai
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Feng Wang
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuhui Li
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Xueying Zhan
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Kai Xu
- Hangzhou Global Scientific and Technological Innovation Center, School of Micro-Nano Electronics, Zhejiang University, Hangzhou 310027, China
- Jun He
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, School of Physics and Technology, Wuhan University, Wuhan 430072, China
- Zhenxing Wang
- CAS Key Laboratory of Nanosystem and Hierarchical Fabrication, National Center for Nanoscience and Technology, Beijing 100190, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
6
Jeon K, Ryu JJ, Im S, Seo HK, Eom T, Ju H, Yang MK, Jeong DS, Kim GH. Purely self-rectifying memristor-based passive crossbar array for artificial neural network accelerators. Nat Commun 2024; 15:129. PMID: 38167379. PMCID: PMC10761713. DOI: 10.1038/s41467-023-44620-1.
Abstract
Memristor-integrated passive crossbar arrays (CAs) could potentially accelerate neural network (NN) computations, but studies on these devices have been limited to software-based simulations owing to their poor reliability. Herein, we propose a self-rectifying memristor-based 1 kb CA as a hardware accelerator for NN computations. We conducted fully hardware-based single-layer NN classification tasks on the Modified National Institute of Standards and Technology database using the developed passive CA, and achieved 100% classification accuracy for 1500 test sets. We also investigated the influence of the CA's defect-tolerance capability, the conductance range of the integrated memristors, and the presence or absence of selection functionality in the integrated memristors on the image classification tasks. We offer valuable insights into the behavior and performance of CA devices under various conditions and provide evidence of the practicality of memristor-integrated passive CAs as hardware accelerators for NN applications.
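The core operation such a memristor crossbar accelerates is an analog matrix-vector multiply, with the network's weights stored as conductances at the row-column crossings. Below is a minimal NumPy sketch of the ideal behavior, plus a toy model of the limited conductance range the abstract investigates; both functions are illustrative idealizations, not the authors' device model.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Ideal passive crossbar read-out: column currents I_j = sum_i G[i, j] * v[i].

    G[i, j] is the conductance at the crossing of input row i and output
    column j; applying the voltage vector v on the rows performs a whole
    matrix-vector multiply (the core of an NN layer) in one analog step.
    """
    return G.T @ v

def quantize_conductance(W, g_min, g_max, levels):
    """Toy model of a finite conductance window: clip ideal weights into
    [g_min, g_max] and snap them to a limited number of programmable levels."""
    W_clipped = np.clip(W, g_min, g_max)
    step = (g_max - g_min) / (levels - 1)
    return g_min + np.round((W_clipped - g_min) / step) * step
```

Mapping trained weights through such a quantizer is one simple way to study how a restricted conductance range degrades classification accuracy.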
Affiliation(s)
- Kanghyeok Jeon
- Division of Materials Science and Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Division of Advanced Materials, Korea Research Institute of Chemical Technology (KRICT), Daejeon, 34114, Republic of Korea
- Jin Joo Ryu
- Division of Advanced Materials, Korea Research Institute of Chemical Technology (KRICT), Daejeon, 34114, Republic of Korea
- Department of Materials Science and Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Seongil Im
- Center for Opto-Electronic Materials and Devices, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
- Hyun Kyu Seo
- Intelligent Electronic Device Lab, Sahmyook University, 815 Hwarang-ro, Nowon-Gu, Seoul, 01795, Republic of Korea
- Taeyong Eom
- Division of Advanced Materials, Korea Research Institute of Chemical Technology (KRICT), Daejeon, 34114, Republic of Korea
- Hyunsu Ju
- Center for Opto-Electronic Materials and Devices, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
- Min Kyu Yang
- Intelligent Electronic Device Lab, Sahmyook University, 815 Hwarang-ro, Nowon-Gu, Seoul, 01795, Republic of Korea
- Doo Seok Jeong
- Division of Materials Science and Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Gun Hwan Kim
- Department of Materials Science and Engineering, Yonsei University, Seoul, 03722, Republic of Korea.
- Department of System Semiconductor Engineering, Yonsei University, Seoul, 03722, Republic of Korea.
7
Alshamrani K, Alshamrani HA, Alqahtani FF, Alshehri AH, Althaiban SH. Generative and Discriminative Learning for Lung X-Ray Analysis Based on Probabilistic Component Analysis. J Multidiscip Healthc 2023; 16:4039-4051. PMID: 38116305. PMCID: PMC10728308. DOI: 10.2147/jmdh.s437445.
Abstract
Introduction: The paper presents a hybrid generative/discriminative classification method aimed at identifying abnormalities, such as cancer, in lung X-ray images. Methods: The proposed method involves a generative model that performs generative embedding in Probabilistic Component Analysis (PrCA). The primary goal of PrCA is to model co-existing information within a probabilistic framework, with the intent of locating the feature vector space for X-ray data based on a defined kernel structure. A kernel-based classifier, grounded in information-theoretic principles, was employed in this study. Results: The performance of the proposed method was evaluated against nearest neighbour (NN) and support vector machine (SVM) classifiers, which use a diagonal covariance matrix and incorporate normal linear and non-linear kernels, respectively. Discussion: The method was found to achieve superior accuracy, offering a viable solution to the class of problems presented. Accuracy rates achieved by the kernels in the NN and SVM models were 95.02% and 92.45%, respectively, suggesting the method's competitiveness with state-of-the-art approaches.
Affiliation(s)
- Khalaf Alshamrani
- Radiological Science Department, Najran University, Najran, Saudi Arabia
- Oncology and Metabolism Department, Medical School, University of Sheffield, Sheffield, United Kingdom
- F F Alqahtani
- Radiological Science Department, Najran University, Najran, Saudi Arabia
- Ali H Alshehri
- Radiological Science Department, Najran University, Najran, Saudi Arabia
8
Liu M, Zhang S, Du Y, Zhang X, Wang D, Ren W, Sun J, Yang S, Zhang G. Identification of Luminal A breast cancer by using deep learning analysis based on multi-modal images. Front Oncol 2023; 13:1243126. PMID: 38044991. PMCID: PMC10691590. DOI: 10.3389/fonc.2023.1243126.
Abstract
Purpose: To evaluate the diagnostic performance of a deep learning model based on multi-modal images in identifying the molecular subtype of breast cancer. Materials and methods: A total of 158 breast cancer patients (170 lesions; median age, 50.8 ± 11.0 years), including 78 Luminal A subtype and 92 non-Luminal A subtype lesions, were retrospectively analyzed and divided into a training set (n = 100), test set (n = 45), and validation set (n = 25). Mammography (MG) and magnetic resonance imaging (MRI) images were used. Five single-modal models were selected: MG, T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), axial apparent diffusion coefficient (ADC), and dynamic contrast-enhanced MRI (DCE-MRI). The deep learning network ResNet50 was used as the basic feature extraction and classification network to construct the molecular subtype identification model. Receiver operating characteristic curves were used to evaluate the prediction efficiency of each model. Results: The accuracy, sensitivity, and specificity of the multi-modal model for identifying the Luminal A subtype were 0.711, 0.889, and 0.593, respectively, and the area under the curve (AUC) was 0.802 (95% CI, 0.657-0.906); the accuracy, sensitivity, and AUC were higher than those of any single-modal model, but the specificity was slightly lower than that of the DCE-MRI model. The AUC values of the MG, T2WI, DWI, ADC, and DCE-MRI models were 0.593 (95% CI, 0.436-0.737), 0.700 (95% CI, 0.545-0.827), 0.564 (95% CI, 0.408-0.711), 0.679 (95% CI, 0.523-0.810), and 0.553 (95% CI, 0.398-0.702), respectively. Conclusion: The combination of deep learning and multi-modal imaging is of great significance for diagnosing breast cancer subtypes and helping doctors select personalized treatment plans.
Affiliation(s)
- Menghan Liu
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Shuai Zhang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Yanan Du
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Xiaodong Zhang
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Dawei Wang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Wanqing Ren
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Jingxiang Sun
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Shiwei Yang
- Department of Anorectal Surgery, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Guang Zhang
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
9
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. PMID: 38249785. PMCID: PMC10796150. DOI: 10.1002/widm.1510.
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
10
Teague J, Socia D, An G, Badylak S, Johnson S, Jiang P, Vodovotz Y, Cockrell RC. Artificial Intelligence Optical Biopsy for Evaluating the Functional State of Wounds. J Surg Res 2023; 291:683-690. PMID: 37562230. DOI: 10.1016/j.jss.2023.07.017.
Abstract
INTRODUCTION: The clinical characterization of the functional status of active wounds in terms of their driving cellular and molecular biology remains a considerable challenge that currently requires excision via a tissue biopsy. In this pilot study, we use a convolutional Siamese neural network (SNN) architecture to predict the functional state of a wound using digital photographs of wounds in a canine model of volumetric muscle loss (VML). METHODS: Digital images of VML injuries and tissue biopsies were obtained in a standardized fashion from an established canine model of VML. Gene expression profiles for each biopsy site were obtained using RNA sequencing. These profiles were converted to functional profiles by a manual review of validated gene ontology databases, in which we determined a hierarchical representation of gene functions based on functional specificity. An SNN was trained to regress functional profile expression values, informed by an image segment showing the surface of a small tissue biopsy. RESULTS: The SNN was able to predict the functional expression of a range of functions with error ranging from ∼5% to ∼30%; the functions most closely associated with the early state of wound healing were predicted best. CONCLUSIONS: These initial results suggest promise for further research regarding this novel use of machine learning regression on medical images. The regression of functional profiles, as opposed to specific genes, both addresses the challenge of genetic redundancy and gives a deeper insight into the mechanistic configuration of a region of tissue in wounds.
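The defining property of a Siamese architecture is that both inputs pass through the same weights before their embeddings are compared. The NumPy sketch below shows that weight sharing with a single linear-plus-ReLU tower and a Euclidean comparison; it is a minimal illustration of the principle, not the study's convolutional model, which regresses functional profiles from far richer image segments.

```python
import numpy as np

def embed(x, W):
    """Shared-weight tower: one linear layer plus ReLU (a stand-in for the
    convolutional tower that would process a biopsy-site image segment)."""
    return np.maximum(W @ x, 0.0)

def siamese_distance(x1, x2, W):
    """Both inputs pass through the SAME weights W; only the resulting
    embeddings are compared (here by Euclidean distance)."""
    return np.linalg.norm(embed(x1, W) - embed(x2, W))
```

Because the two towers share W, the comparison is symmetric and identical inputs map to distance zero, which is what lets a Siamese model learn a similarity structure rather than a per-input label.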
Affiliation(s)
- Joe Teague
- Department of Surgery, University of Vermont, Burlington, Vermont
- Damien Socia
- Department of Surgery, University of Vermont, Burlington, Vermont
- Gary An
- Department of Surgery, University of Vermont, Burlington, Vermont
- Stephen Badylak
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Scott Johnson
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Peng Jiang
- Center for Gene Regulation in Health and Disease (GRHD), Cleveland State University, Cleveland, Ohio
- Yoram Vodovotz
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania
- R Chase Cockrell
- Department of Surgery, University of Vermont, Burlington, Vermont.
11
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272. PMCID: PMC10377683. DOI: 10.3390/cancers15143608.
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is a research hotspot in artificial intelligence and computer vision. Because of the rapid development of deep learning methods, the very high accuracy and timeliness that cancer diagnosis demands, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), together with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced techniques emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology to medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained models based on deep neural networks can still be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
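Batch normalization, one of the overfitting-prevention methods this review lists, is simple enough to sketch directly. A minimal stdlib-Python toy (the input values and the gamma/beta defaults are illustrative, not taken from the review): each batch of activations is shifted to zero mean and unit variance, then rescaled by learnable parameters.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of scalar activations to zero mean and unit
    variance, then apply the learnable scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

normed = batch_norm([2.0, 4.0, 6.0, 8.0])
```

With gamma=1 and beta=0 the output batch has mean 0 and variance close to 1; in a real network gamma and beta are trained alongside the other weights.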
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK

12
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Archives of Computational Methods in Engineering 2023; 30:3173-3233. PMID: 37260910; PMCID: PMC10071480; DOI: 10.1007/s11831-023-09899-9.
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in different areas, especially object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multi-lingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several levels of abstraction. Recently, different ideas in deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Innovations in the internal architecture of CNNs and different representational styles have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and different convolutional neural network models, especially the depth and width of models, as well as CNN components, applications, and current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia

13
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. PMID: 36738650; PMCID: PMC10010286; DOI: 10.1016/j.media.2023.102762.
Abstract
The Transformer, one of the latest technological advances in deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to ask about the status quo of Transformers in medical imaging: can Transformer models transform medical imaging? In this paper, we attempt to respond to this inquiry. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and exhibit current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and beyond. What distinguishes our review is its organization based on the Transformer's key defining properties, which are mostly derived from comparing the Transformer and the CNN, and on the type of architecture, which specifies the manner in which the Transformer and the CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
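The key defining operation behind the Transformers this review surveys is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal pure-Python illustration (the toy 2x2 matrices are our own, not from the paper):

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(r, c)) for c in zip(*B)] for r in A]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    KT = [list(col) for col in zip(*K)]
    scores = [[s / math.sqrt(d_k) for s in row] for row in matmul(Q, KT)]
    weights = [softmax(row) for row in scores]          # one row per query
    return matmul(weights, V), weights

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out, w = attention(Q, K, V)
```

Each query attends most strongly to the key it aligns with, and every row of the weight matrix sums to one; it is this global, content-based mixing (versus a CNN's fixed local receptive field) that the review's comparison turns on.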
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China

14
Jakkaladiki SP, Maly F. An efficient transfer learning based cross model classification (TLBCM) technique for the prediction of breast cancer. PeerJ Comput Sci 2023; 9:e1281. PMID: 37346575; PMCID: PMC10280457; DOI: 10.7717/peerj-cs.1281.
Abstract
Breast cancer has been among the most life-threatening diseases for women in the last few decades. The high mortality rate among women is due to breast cancer because of low awareness and a minimal number of medical facilities to detect the disease in its early stages. In the recent era, the situation has changed, with many technological advancements and medical equipment available to observe breast cancer development. Machine learning techniques such as support vector machines (SVM), logistic regression, and random forests have been used to analyze images of cancer cells on different datasets. Although particular techniques have performed well on smaller datasets, their accuracy on most data still falls short of what real-time medical environments require. The proposed research applies state-of-the-art deep learning techniques, a transfer-learning-based cross model classification (TLBCM) built on convolutional neural networks (CNN) and transfer learning with residual networks (ResNet) and DenseNet, for efficient prediction of breast cancer with a minimized error rate. The convolutional neural network and transfer learning are the most prominent techniques for predicting the main features in the dataset. Sensitive data is protected using a cyber-physical system (CPS) while the images are used virtually over the network: CPS acts as a virtual connection between humans and networks, and data transferred over the network is monitored through it. ResNet transforms the data across many layers without compromising the minimal error rate, while DenseNet mitigates the vanishing gradient problem. The experiments are carried out on the Breast Cancer Wisconsin (Diagnostic) dataset and the Breast Cancer Histopathological Dataset (BreakHis). The convolutional neural network and transfer learning achieved a validation accuracy of 98.3%. The results of the proposed methods show the highest classification rate between benign and malignant data. The proposed method improves the efficiency and speed of classification, which makes it more convenient for discovering breast cancer in earlier stages than previously proposed methodologies.
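The identity shortcut that lets ResNet transform data across many layers without degrading accuracy can be sketched in a few lines. This toy block (the weights and vector sizes are invented for illustration, not the paper's model) shows the defining property y = F(x) + x: with zero-initialized weights the block is exactly the identity map, which is why very deep residual stacks remain trainable.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(W, x, b):
    """Affine map W x + b on plain lists."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def residual_block(x, W1, b1, W2, b2):
    """y = F(x) + x: the skip connection adds the input back onto the
    two-layer transformation, so layers only learn a residual."""
    h = relu(linear(W1, x, b1))
    fx = linear(W2, h, b2)
    return [f + xi for f, xi in zip(fx, x)]

# With all-zero weights F(x) = 0, so the block passes x through unchanged.
W0 = [[0.0, 0.0], [0.0, 0.0]]
b0 = [0.0, 0.0]
y = residual_block([1.5, -2.0], W0, b0, W0, b0)   # y == [1.5, -2.0]
```

DenseNet pushes the same idea further by concatenating (rather than adding) earlier activations into later layers, which is how it conciliates the vanishing-gradient problem the abstract mentions.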
15
Gogoi CR, Rahman A, Saikia B, Baruah A. Protein Dihedral Angle Prediction: The State of the Art. ChemistrySelect 2023. DOI: 10.1002/slct.202203427.
Affiliation(s)
- Aziza Rahman
- Department of Chemistry, Dibrugarh University, Dibrugarh, Assam, India
- Bondeepa Saikia
- Department of Chemistry, Dibrugarh University, Dibrugarh, Assam, India
- Anupaul Baruah
- Department of Chemistry, Dibrugarh University, Dibrugarh, Assam, India

16
Maduranga KDG, Zadorozhnyy V, Ye Q. Symmetry-structured convolutional neural networks. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08168-3.
17
Vasdev D, Gupta V, Shubham S, Chaudhary A, Jain N, Salimi M, Ahmadian A. Periapical dental X-ray image classification using deep neural networks. Annals of Operations Research 2022; 326:1-29. PMID: 36157976; PMCID: PMC9483455; DOI: 10.1007/s10479-022-04961-4.
Abstract
This paper studies the problem of detecting dental diseases. Dental problems affect the vast majority of the world's population. Caries, RCT (Root Canal Treatment), abscess, bone loss, and missing teeth are some of the most common dental conditions that affect people of all ages all over the world. Delayed or incorrect diagnosis may result in mistreatment, affecting not only an individual's oral health but also his or her overall health, thereby making it an important research area in medicine and engineering. We propose a pipelined Deep Neural Network (DNN) approach to detect healthy and non-healthy periapical dental X-ray images. Even a minor enhancement or improvement of existing techniques can go a long way toward providing significant health benefits in the medical field. This paper makes a successful attempt to contribute a different type of pipelined approach using AlexNet in this regard. The approach is trained on a large dataset of 16,000 dental X-ray images, correctly identifying healthy and non-healthy X-ray images. We use an optimized Convolutional Neural Network and three state-of-the-art DNN models, namely ResNet-18, ResNet-34, and AlexNet, for disease classification. In our study, the AlexNet model outperforms the other models with an accuracy of 0.852. The precision, recall, and F1 scores of AlexNet also surpass the other models with a score of 0.850 across all metrics. The area under the ROC curve also signifies that both the false-positive rate and the false-negative rate are low. We conclude that even with a large dataset of raw X-ray images, the AlexNet model generalizes effectively to previously unseen data and can aid in the diagnosis of a variety of dental diseases.
Affiliation(s)
- Dipit Vasdev
- Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi, India
- Vedika Gupta
- Jindal Global Business School, O.P. Jindal Global University, Sonipat, Haryana 131001, India
- Shubham Shubham
- Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi, India
- Ankit Chaudhary
- Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi, India
- Nikita Jain
- Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi, India
- Mehdi Salimi
- Department of Mathematics and Statistics, St. Francis Xavier University, Antigonish, NS, Canada
- Center for Dynamics, Faculty of Mathematics, Technische Universität Dresden, Dresden, Germany
- Ali Ahmadian
- Department of Law, Economics and Human Sciences and Decisions Lab, Mediterranea University of Reggio Calabria, 89125 Reggio Calabria, Italy
- Department of Mathematics, Near East University, Nicosia, TRNC, Mersin 10, Turkey

18
Yu Z, Lan K, Liu Z, Han G. Progressive Ensemble Kernel-Based Broad Learning System for Noisy Data Classification. IEEE Transactions on Cybernetics 2022; 52:9656-9669. PMID: 33784632; DOI: 10.1109/tcyb.2021.3064821.
Abstract
The broad learning system (BLS) is an algorithm that facilitates feature representation learning and data classification. Although the weights of a BLS are obtained by analytical computation, which brings better generalization and higher efficiency, BLS suffers from two drawbacks: 1) performance depends on the number of hidden nodes, which requires manual tuning, and 2) the double random mappings introduce uncertainty, which leads to poor resistance to noisy data as well as unpredictable effects on performance. To address these issues, a kernel-based BLS (KBLS) method is proposed by projecting the feature nodes obtained from the first random mapping into a kernel space. This manipulation reduces the uncertainty, which contributes to performance improvements with a fixed number of hidden nodes, and means manual tuning is no longer needed. Moreover, to further improve the stability and noise resistance of KBLS, a progressive ensemble framework is proposed, in which the residual of the previous base classifiers is used to train the following base classifier. We conduct comparative experiments against existing state-of-the-art hierarchical learning methods on multiple noisy real-world datasets. The experimental results indicate our approaches achieve the best, or at least comparable, performance in terms of accuracy.
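The progressive-ensemble idea, each new base learner fit to the residual left by its predecessors, can be sketched with decision stumps standing in for the paper's kernel-based base classifiers (a simplification, not the KBLS method itself; the 1-D data below is a toy step function invented for illustration):

```python
def fit_stump(xs, residuals):
    """Find the 1-D threshold split minimizing squared error, predicting
    the mean residual on each side of the split."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def progressive_ensemble(xs, ys, n_rounds=10):
    """Each base learner is trained on the residual of the ensemble so
    far; the final prediction sums all base learners."""
    learners, residuals = [], list(ys)
    for _ in range(n_rounds):
        stump = fit_stump(xs, residuals)
        learners.append(stump)
        residuals = [r - stump(x) for x, r in zip(xs, residuals)]
    return lambda x: sum(f(x) for f in learners)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
model = progressive_ensemble(xs, ys)
```

Because every learner only has to correct what the previous ones missed, errors shrink progressively, the same stabilizing mechanism the KBLS ensemble exploits against noisy data.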
19
Mohammed MA, Al-Khateeb B, Yousif M, Mostafa SA, Kadry S, Abdulkareem KH, Garcia-Zapirain B. Novel Crow Swarm Optimization Algorithm and Selection Approach for Optimal Deep Learning COVID-19 Diagnostic Model. Computational Intelligence and Neuroscience 2022; 2022:1307944. PMID: 35996653; PMCID: PMC9392599; DOI: 10.1155/2022/1307944.
Abstract
Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which COVID-19 diagnostic model should be selected and which performance criteria decision-makers in healthcare organizations should consider. A selection scheme is therefore necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization algorithm. Crow swarm optimization is employed to find an optimal set of coefficients using a designed fitness function that evaluates the performance of the deep learning models, and is modified to obtain a good coefficient distribution by considering the best average fitness. Two datasets are utilized: the first includes 746 computed tomography images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unimproved computed tomography lung images of 632 positive COVID-19 cases. Fifteen trained and pretrained deep learning models and nine evaluation metrics are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 has an accuracy of 91.46% and an F1-score of 90.49%. For the first dataset, the ResNet50 algorithm is selected as the optimal deep learning model for COVID-19 identification, with a closeness overall fitness value of 5715.988. In contrast, for the second dataset, the VGG16 algorithm is selected as the optimal model, with a closeness overall fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets. The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.
Affiliation(s)
- Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Anbar, Iraq
- Belal Al-Khateeb
- College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Anbar, Iraq
- Mohammed Yousif
- Directorate of Regions and Governorates Affairs, Ministry of Youth & Sport, Ramadi 31065, Anbar, Iraq
- Salama A. Mostafa
- Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Johor 86400, Malaysia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, Kristiansand 4608, Norway
- Karrar Hameed Abdulkareem
- College of Agriculture, Al-Muthanna University, Samawah 66001, Iraq
- College of Engineering, University of Warith Al-Anbiyaa, Karbala, Iraq

20
Chang J, Chang MF, Angelov N, Hsu CY, Meng HW, Sheng S, Glick A, Chang K, He YR, Lin YB, Wang BY, Ayilavarapu S. Application of deep machine learning for the radiographic diagnosis of periodontitis. Clin Oral Investig 2022; 26:6629-6637. PMID: 35881240; DOI: 10.1007/s00784-022-04617-4.
Abstract
OBJECTIVE Successful application of deep machine learning could reduce time-consuming and labor-intensive clinical work of calculating the amount of radiographic bone loss (RBL) in diagnosing and treatment planning for periodontitis. This study aimed to test the accuracy of RBL classification by machine learning. MATERIALS AND METHODS A total of 236 patients with standardized full mouth radiographs were included. Each tooth from the periapical films was evaluated by three calibrated periodontists for categorization of RBL and radiographic defect morphology. Each image was pre-processed and augmented to ensure proper data balancing without data pollution, then a novel multitasking InceptionV3 model was applied. RESULTS The model demonstrated an average accuracy of 0.87 ± 0.01 in the categorization of mild (< 15%) or severe (≥ 15%) bone loss with fivefold cross-validation. Sensitivity, specificity, positive predictive, and negative predictive values of the model were 0.86 ± 0.03, 0.88 ± 0.03, 0.88 ± 0.03, and 0.86 ± 0.02, respectively. CONCLUSIONS Application of deep machine learning for the detection of alveolar bone loss yielded promising results in this study. Additional data would be beneficial to enhance model construction and enable better machine learning performance for clinical implementation. CLINICAL RELEVANCE Higher accuracy of radiographic bone loss classification by machine learning can be achieved with more clinical data and proper model construction for valuable clinical application.
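The sensitivity, specificity, and predictive values reported above all derive from a 2x2 confusion matrix. A minimal sketch (the counts below are invented to roughly echo the paper's 0.86/0.88 figures, not its actual data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for 200 teeth: 100 truly severe, 100 truly mild.
m = diagnostic_metrics(tp=86, fp=12, tn=88, fn=14)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how prevalent severe bone loss is in the sample, which is why balanced data (as the authors ensured via augmentation) matters when reading them.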
Affiliation(s)
- Jennifer Chang
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Ming-Feng Chang
- Institute of Computational Intelligence, National Yangming Chiaotung University, Taipei, Taiwan
- Department of Computer Science, National Yangming Chiaotung University, Taipei, Taiwan
- Nikola Angelov
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Chih-Yu Hsu
- Institute of Computational Intelligence, National Yangming Chiaotung University, Taipei, Taiwan
- Hsiu-Wan Meng
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Sally Sheng
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Aaron Glick
- Department of General Practice and Dental Public Health, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Kearny Chang
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Yun-Ru He
- Institute of Computational Intelligence, National Yangming Chiaotung University, Taipei, Taiwan
- Yi-Bing Lin
- Institute of Computational Intelligence, National Yangming Chiaotung University, Taipei, Taiwan
- Department of Computer Science, National Yangming Chiaotung University, Taipei, Taiwan
- Bing-Yan Wang
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA
- Srinivas Ayilavarapu
- Department of Periodontics and Dental Hygiene, The University of Texas Health Science Center at Houston School of Dentistry, Houston, TX, USA

21
Biswas S, Adhikari S, Chawla R, Maiti N, Bhatia D, Phukan P, Mukherjee M. Artificial intelligence enabled non-invasive T-ray imaging technique for early detection of coronavirus infected patients. Informatics in Medicine Unlocked 2022; 32:101025. PMID: 35873921; PMCID: PMC9296229; DOI: 10.1016/j.imu.2022.101025.
Abstract
A new artificial intelligence (AI) supported T-ray imaging system was designed and implemented for non-invasive and non-ionizing screening of coronavirus-affected patients. The new system has the potential to replace the standard conventional X-ray based imaging modality of virus detection. This research article reports the development of a solid-state room-temperature terahertz source for thermograph studies. Exposure time and radiation energy are optimized through several real-time experiments. During its incubation period, coronavirus stays within the cells of the upper respiratory tract, and its presence often causes an increased blood supply to the virus-affected cells and inter-cellular regions, resulting in a localized increase of water content in those cells and tissues compared to neighbouring normal cells. Under THz-radiation exposure, the incident energy is absorbed more strongly in the virus-affected cells and inter-cellular regions, which heat up; thus, a sharp temperature gradient is observed in the corresponding thermograph study. Additionally, structural changes in virus-affected zones contribute significantly to the better contrast of the thermographs. Considering the effectiveness of AI analysis tools in various medical diagnoses, the authors have employed an explainable AI-assisted methodology to correctly identify and mark the affected pulmonary region for the developed imaging technique and thus validate the model. This AI-enabled non-ionizing THz-thermography method is expected to address the voids in early COVID diagnosis at the onset of infection.
Affiliation(s)
- Swarnava Biswas
- School of Health Sciences, The Neotia University, Kolkata, West Bengal, India
- Saikat Adhikari
- Department of Physics, School of Basic & Applied Sciences, Adamas University, Kolkata, West Bengal, India
- Riddhi Chawla
- Medical School, Akfa University, Tashkent, Uzbekistan
- Niladri Maiti
- Medical School, Akfa University, Tashkent, Uzbekistan
- Dinesh Bhatia
- Department of Biomedical Engineering, North Eastern Hill University, Shillong, Meghalaya, India
- Pranjal Phukan
- Department of Radiology and Imaging, North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences, Shillong, Meghalaya, India
- Moumita Mukherjee
- Department of Physics, School of Basic & Applied Sciences, Adamas University, Kolkata, West Bengal, India

22
Overview of Deep Learning Models in Biomedical Domain with the Help of R Statistical Software. Serbian Journal of Experimental and Clinical Research 2022. DOI: 10.2478/sjecr-2018-0063.
Abstract
With the increase in the volume of data and the presence of structured and unstructured data in the biomedical field, there is a need for building models which can handle complex and non-linear relations in the data and also predict and classify outcomes with higher accuracy. Deep learning models are one such class of models: they can handle complex and nonlinear data and have been used increasingly in the biomedical field in recent years. Deep learning methodology evolved from artificial neural networks, which process the input data through multiple hidden layers with a higher level of abstraction. Deep learning networks are used in various fields such as image processing, speech recognition, fraud detection, classification, and prediction. The objective of this paper is to provide an overview of deep learning models and their application in the biomedical domain using the R statistical software. Deep learning concepts are illustrated with the R statistical software package, and X-ray images from NIH datasets are used to examine the prediction accuracy of the models. The deep learning models helped to classify the outcomes under study with 91% accuracy. This paper has provided an overview of deep learning models, their types, and their application in the biomedical domain, and has shown the effect of a deep learning network in classifying images into normal and diseased with 91% accuracy with the help of the R statistical package.
23
MVS-GCN: A prior brain structure learning-guided multi-view graph convolution network for autism spectrum disorder diagnosis. Comput Biol Med 2022; 142:105239. DOI: 10.1016/j.compbiomed.2022.105239.
24
Nerve optic segmentation in CT images using a deep learning model and a texture descriptor. Complex & Intelligent Systems 2022. DOI: 10.1007/s40747-022-00694-w.
Abstract
Increased intracranial pressure (ICP) can be described as a rise in pressure around the brain and can lead to serious health problems. The assessment of ultrasound images is commonly conducted by skilled experts, which is a time-consuming approach, but advanced computer-aided diagnosis (CAD) systems can assist the physician in decreasing the time needed for ICP diagnosis. The accurate detection of the nerve optic regions, the drawing of a precise slope line behind the eyeball, and the calculation of the optic nerve diameter are the main aims of this research. First, Fuzzy C-means (FCM) clustering is employed to segment the input CT screening images into different parts. Second, a histogram equalization approach is used for region-based image quality enhancement. Then, the Local Directional Number (LDN) method is used to represent some key information in a new image. Finally, a cascade Convolutional Neural Network (CNN) is employed for nerve optic segmentation using the two distinct input images. Comprehensive experiments on a CT screening dataset [The Cancer Imaging Archive (TCIA)] consisting of 1600 images show competitive results in the accurate extraction of the brain features. The Dice, Specificity, and Precision indexes for the proposed approach are reported as 87.7%, 91.3%, and 90.1%, respectively. The final classification results show that the proposed approach effectively and accurately detects the optic nerve and its diameter in comparison with the other methods. Therefore, this method can be used for the early diagnosis of ICP, preventing the occurrence of serious health problems in patients.
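The Fuzzy C-means step in the pipeline above alternates two updates, soft memberships from inverse-distance ratios, then fuzzily weighted cluster centers. A minimal 1-D sketch of generic FCM (not the paper's exact configuration; the data, the fuzzifier m=2, and the evenly spaced initialization are illustrative):

```python
def fuzzy_c_means(xs, c=2, m=2.0, n_iter=50, eps=1e-9):
    """Plain fuzzy C-means on scalars: alternate membership and center
    updates for n_iter rounds. Returns final centers and memberships."""
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # spread init
    p = 2.0 / (m - 1.0)
    U = []
    for _ in range(n_iter):
        # membership of point x in cluster i: inverse-distance-ratio rule
        U = []
        for x in xs:
            d = [abs(x - ck) + eps for ck in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** p for j in range(c))
                      for i in range(c)])
        # center update: mean of the points weighted by membership^m
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, U

centers, U = fuzzy_c_means([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])
```

Unlike hard k-means, every pixel keeps a graded membership in every cluster (each row of U sums to 1), which is what makes FCM tolerant of the blurred tissue boundaries typical of CT slices.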
25
Automatic Building Extraction on Satellite Images Using Unet and ResNet50. Computational Intelligence and Neuroscience 2022; 2022:5008854. PMID: 35222630; PMCID: PMC8881177; DOI: 10.1155/2022/5008854.
Abstract
Recently, settlement planning and the replanning process have become a main problem in rapidly growing cities. Unplanned urban settlements are quite common, especially in low-income countries. Building extraction on satellite images poses another problem: manual building extraction is very difficult and takes a lot of time. Artificial intelligence technology, which has advanced significantly, has the potential to provide building extraction on high-resolution satellite images. This study proposes differentiating buildings by image segmentation on high-resolution satellite images with the U-Net architecture. The open-source Massachusetts building dataset, which includes residential buildings of the city of Boston, was used, with the aim of extracting buildings in this high-density city. In the U-Net architecture, image segmentation is performed with different encoders and the results are compared. The work achieved 82.2% IoU accuracy in building segmentation, a high F1 score of 0.9, and a successful image segmentation with 90% accuracy. This study demonstrated the potential of automatic building extraction with the help of artificial intelligence in high-density residential areas, and established that building mapping can be achieved from high-resolution satellite images with high accuracy.
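The IoU and F1 figures quoted above are tightly coupled: for binary masks, F1 (the Dice coefficient) equals 2*IoU/(1+IoU). A small sketch on invented flat masks:

```python
def iou_and_dice(pred, truth):
    """IoU (Jaccard) and Dice/F1 for two flat binary masks of equal
    length; empty-vs-empty is scored as a perfect match."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred  = [1, 1, 1, 0, 0, 1]   # predicted building pixels (toy mask)
truth = [1, 1, 0, 0, 1, 1]   # ground-truth building pixels
iou, dice = iou_and_dice(pred, truth)   # iou = 0.6, dice = 0.75
```

The identity explains why an 82.2% IoU and an F1 of about 0.9 are mutually consistent: 2*0.822/1.822 is roughly 0.90.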
26
Kethireddy R, Kadiri SR, Gangashetty SV. Deep neural architectures for dialect classification with single frequency filtering and zero-time windowing feature representations. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:1077. [PMID: 35232068 DOI: 10.1121/10.0009405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 01/16/2022] [Indexed: 06/14/2023]
Abstract
The goal of this study is to investigate advanced signal processing approaches [single frequency filtering (SFF) and zero-time windowing (ZTW)] with modern deep neural networks (DNNs) [convolution neural networks (CNNs), temporal convolution neural networks (TCN), time-delay neural network (TDNN), and emphasized channel attention, propagation and aggregation in TDNN (ECAPA-TDNN)] for dialect classification of major dialects of English. Previous studies indicated that SFF and ZTW methods provide higher spectro-temporal resolution. To capture the intrinsic variations in articulations among dialects, four feature representations [spectrogram (SPEC), cepstral coefficients, mel filter-bank energies, and mel-frequency cepstral coefficients (MFCCs)] are derived from SFF and ZTW methods. Experiments with and without data augmentation using CNN classifiers revealed that the proposed features performed better than baseline short-time Fourier transform (STFT)-based features on the UT-Podcast database [Hansen, J. H., and Liu, G. (2016). "Unsupervised accent classification for deep data fusion of accent and language information," Speech Commun. 78, 19-33]. Even without data augmentation, all the proposed features showed an approximate improvement of 15%-20% (relative) over the best baseline (SPEC-STFT) feature. TCN, TDNN, and ECAPA-TDNN classifiers that capture wider temporal context further improved the performance for many of the proposed and baseline features. Among all the baseline and proposed features, the best performance is achieved with single frequency filtered cepstral coefficients for TCN (81.30%), TDNN (81.53%), and ECAPA-TDNN (85.48%). An investigation of data-driven filters, instead of a fixed mel-scale, improved the performance by 2.8% and 1.4% (relative) for SPEC-STFT and SPEC-SFF, and was nearly equal for SPEC-ZTW. To assist related work, we have made the code available (Kethireddy, R., and Kadiri, S. R. (2022). "Deep neural architectures for dialect classification with single frequency filtering and zero-time windowing feature representations," https://github.com/r39ashmi/e2e_dialect, last viewed 21 December 2021).
Affiliation(s)
- Rashmi Kethireddy
- Speech Processing Laboratory, International Institute of Information Technology-Hyderabad (IIIT-H), 500032, India
- Sudarsana Reddy Kadiri
- Department of Signal Processing and Acoustics, Aalto University, Otakaari 3, FI-00076 Espoo, Finland
27
Lin C, Wang L, Shi L. AAPred-CNN: accurate predictor based on deep convolution neural network for identification of anti-angiogenic peptides. Methods 2022; 204:442-448. [PMID: 35031486 DOI: 10.1016/j.ymeth.2022.01.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 12/28/2021] [Accepted: 01/09/2022] [Indexed: 12/13/2022] Open
Abstract
Recently, deep learning techniques have been developed for various bioactive peptide prediction tasks. However, only conventional machine learning-based methods exist for the prediction of anti-angiogenic peptides (AAPs), which play an important role in cancer treatment. The main reason why no deep learning method has been applied in this field is that there are too few experimentally validated AAPs to support the training of deep models, and researchers have long believed that deep learning depends heavily on the amount of labeled data. In this paper, as a tentative work, we predict AAPs by constructing different classical deep learning models and propose the first deep convolutional neural network-based predictor (AAPred-CNN) for AAPs. Contrary to intuition, the experimental results show that deep learning models can achieve performance superior or comparable to the state-of-the-art model, even though they are given only a few labeled sequences for training. We also decipher the influence of hyper-parameters and training samples on the performance of the deep learning models to help understand how they work. Furthermore, we visualize the learned embeddings by dimension reduction to increase model interpretability, and reveal the residue propensity of AAPs through statistics of the convolutional features for different residues. In summary, this work demonstrates the powerful representation ability of AAPred-CNN for AAP prediction, further improving the prediction accuracy of AAPs.
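A CNN over peptides needs a numeric input representation; a common choice (the paper's exact encoding is not specified here, so this is an illustrative assumption) is a zero-padded one-hot matrix over the 20 standard residues:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def one_hot_peptide(seq, max_len=50):
    """Encode a peptide as a (max_len x 20) one-hot matrix.
    Positions beyond the sequence length stay all-zero (padding)."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    mat = [[0] * len(AMINO_ACIDS) for _ in range(max_len)]
    for pos, aa in enumerate(seq[:max_len]):
        mat[pos][idx[aa]] = 1
    return mat
```

The resulting matrix can be fed to 1-D convolutions that slide along the sequence axis, which is what allows the convolutional features to be aggregated per residue type.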
Affiliation(s)
- Changhang Lin
- School of Big Data and Artificial Intelligence, Fujian Polytechnic Normal University, Fuzhou, China
- Lei Wang
- Beidahuang Industry Group General Hospital, Harbin, China
- Lei Shi
- Department of Spine Surgery, Changzheng Hospital, Naval Medical University, Shanghai, China
28
Schumaker G, Becker A, An G, Badylak S, Johnson S, Jiang P, Vodovotz Y, Cockrell RC. Optical Biopsy Using a Neural Network to Predict Gene Expression From Photos of Wounds. J Surg Res 2021; 270:547-554. [PMID: 34826690 DOI: 10.1016/j.jss.2021.10.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 09/16/2021] [Accepted: 10/09/2021] [Indexed: 01/02/2023]
Abstract
BACKGROUND The clinical characterization of the biological status of complex wounds remains a considerable challenge. Digital photography provides a non-invasive means of obtaining wound information and is currently employed to assess wounds qualitatively. Advances in machine learning (ML) image processing provide a means of identifying "hidden" features in pictures. This pilot study trains a convolutional neural network (CNN) to predict gene expression based on digital photographs of wounds in a canine model of volumetric muscle loss (VML). MATERIALS AND METHODS Images of VML injuries and tissue biopsies were obtained in a canine model of VML. A CNN was trained to regress gene expression values as a function of the extracted image segment (color and spatial distribution). Performance of the CNN was assessed on a held-back test set of images using the Mean Absolute Percentage Error (MAPE). RESULTS The CNN was able to predict the expression of certain genes from digital images, with a MAPE ranging from ∼10% to ∼30%, indicating the presence of distinct, identifiable patterns in gene expression throughout the wound. CONCLUSIONS These initial results suggest promise for further research on this novel use of ML regression for medical images. Specifically, the use of CNNs to determine the mechanistic biological state of a VML wound could aid both the design of future mechanistic interventions and the design of trials to test those therapies. Future work will expand the CNN training and/or test set, with potential expansion to predicting functional gene modules.
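The MAPE used to score the regression above is a simple relative-error average; a minimal sketch:

```python
def mape(predicted, actual):
    """Mean Absolute Percentage Error (in percent) over paired values.
    Assumes no actual value is zero."""
    return 100.0 * sum(abs(p - a) / abs(a)
                       for p, a in zip(predicted, actual)) / len(actual)
```

A MAPE of ∼10% therefore means the predicted expression values deviate from the measured values by about 10% on average.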
Affiliation(s)
- Grant Schumaker
- Department of Surgery, University of Vermont, Burlington, Vermont
- Andrew Becker
- Department of Surgery, University of Vermont, Burlington, Vermont
- Gary An
- Department of Surgery, University of Vermont, Burlington, Vermont
- Stephen Badylak
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Scott Johnson
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Peng Jiang
- Center for Gene Regulation in Health and Disease (GRHD), Department of Biological, Geological and Environmental Sciences (BGES), Cleveland State University, Cleveland, OH
- Yoram Vodovotz
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Surgery, University of Pittsburgh, W944 Biomedical Sciences Tower, Pittsburgh, Pennsylvania
- R Chase Cockrell
- Department of Surgery, University of Vermont, Burlington, Vermont
29
Sil R, Alpana, Roy A, Dasmahapatra M, Dhali D. An intelligent approach for automated argument based legal text recognition and summarization using machine learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189867] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
It is essential to provide a structured data feed to a computer so that it can process the input flawlessly and generate the desired output within minimal computational time. Generally, computer programmers should provide a structured data feed to the computer program for its successful execution. A hardcopy document should be scanned to generate its corresponding computer-readable softcopy version. This process also proves to be a budget-friendly approach that disengages human resources from the record-maintenance process; due to this automation, the workload of the existing manpower is reduced to a significant level. This concept may prove beneficial for the delivery of any type of service to the ultimate beneficiary (i.e., the citizen) within a minimal time frame. The administration has to deal with various issues of citizens under the pressure of a huge population seeking legal help to resolve their disputes, leading to large numbers of pending legal cases at several courts of the country. To assist victims with the prompt delivery of justice and to help legal professionals reduce their workload, this paper proposes a machine learning-based automated legal model that enhances the efficiency of the legal support system with an accuracy of 94%.
Affiliation(s)
- Riya Sil
- Computer Science & Engineering Department, Adamas University, Kolkata, India
- Alpana
- Computer Science & Technology, Manav Rachna University, Faridabad, India
- School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
- Abhishek Roy
- Computer Science & Engineering Department, Adamas University, Kolkata, India
- Mili Dasmahapatra
- Computer Science & Engineering Department, Adamas University, Kolkata, India
- Debojit Dhali
- Computer Science & Engineering Department, Adamas University, Kolkata, India
30
Urago Y, Okamoto H, Kaneda T, Murakami N, Kashihara T, Takemori M, Nakayama H, Iijima K, Chiba T, Kuwahara J, Katsuta S, Nakamura S, Chang W, Saitoh H, Igaki H. Evaluation of auto-segmentation accuracy of cloud-based artificial intelligence and atlas-based models. Radiat Oncol 2021; 16:175. [PMID: 34503533 PMCID: PMC8427857 DOI: 10.1186/s13014-021-01896-1] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 08/26/2021] [Indexed: 01/13/2023] Open
Abstract
Background Contour delineation, a crucial process in radiation oncology, is time-consuming, and inaccuracy due to inter-observer variation has been a critical issue in this process. Atlas-based automatic segmentation was developed to improve delineation efficiency and reduce inter-observer variation. Additionally, automated segmentation using artificial intelligence (AI) has recently become available. In this study, auto-segmentations by atlas- and AI-based models for organs at risk (OARs) in patients with prostate and head and neck cancer were performed and delineation accuracies were evaluated. Methods Twenty-one patients with prostate cancer and 30 patients with head and neck cancer were evaluated. MIM Maestro was used to apply the atlas-based segmentation, and MIM Contour ProtégéAI was used to apply the AI-based segmentation. Three similarity indices, the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean distance to agreement (MDA), were evaluated and compared with manual delineations. In addition, radiation oncologists visually evaluated the delineation accuracies. Results Among patients with prostate cancer, the AI-based model demonstrated higher accuracy than the atlas-based model on DSC, HD, and MDA for the bladder and rectum. Upon visual evaluation, some errors were observed in the atlas-based delineations when the boundary between the small bowel or the seminal vesicle and the bladder was unclear. For patients with head and neck cancer, no significant differences were observed between the two models for almost all OARs, except for small delineations such as the optic chiasm and optic nerve. The DSC tended to be lower when the HD and the MDA were smaller in small-volume delineations. Conclusions In terms of efficiency, the processing time for head and neck cancers was much shorter than manual delineation. While quantitative evaluation showed that AI-based segmentation was significantly more accurate than atlas-based segmentation for prostate cancer, there was no significant difference for head and neck cancer. According to the visual evaluation, the reduced need for manual correction in AI-based segmentation indicates that its segmentation efficiency is higher than that of the atlas-based model. The AI-based model can thus be expected to improve segmentation efficiency and to significantly shorten delineation time. Supplementary Information The online version contains supplementary material available at 10.1186/s13014-021-01896-1.
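The three similarity indices above can be sketched for small 2-D point sets; this is a simplified illustration (real evaluations run on 3-D voxel masks and surface meshes, typically with optimized libraries):

```python
def dsc(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def _dists(a, b):
    """For each point in a, its distance to the nearest point in b."""
    return [min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in b)
            for ax, ay in a]

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case nearest-point distance."""
    return max(max(_dists(a, b)), max(_dists(b, a)))

def mda(a, b):
    """Mean distance to agreement, averaged over both directions."""
    d1, d2 = _dists(a, b), _dists(b, a)
    return (sum(d1) / len(d1) + sum(d2) / len(d2)) / 2
```

This also illustrates the paper's observation about small structures: for a delineation of only a few voxels, a one-voxel offset drops the DSC sharply even while HD and MDA stay small.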
Affiliation(s)
- Yuka Urago
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan; Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Hiroyuki Okamoto
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Tomoya Kaneda
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Naoya Murakami
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Tairo Kashihara
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Mihiro Takemori
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan; Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Hiroki Nakayama
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan; Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Kotaro Iijima
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Takahito Chiba
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Junichi Kuwahara
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan; Department of Radiological Technology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Shouichi Katsuta
- Department of Radiological Technology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Satoshi Nakamura
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Weishan Chang
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
- Hidetoshi Saitoh
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
- Hiroshi Igaki
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
31
Mackay BS, Marshall K, Grant-Jacob JA, Kanczler J, Eason RW, Oreffo ROC, Mills B. The future of bone regeneration: integrating AI into tissue engineering. Biomed Phys Eng Express 2021; 7. [PMID: 34271556 DOI: 10.1088/2057-1976/ac154f] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 07/16/2021] [Indexed: 01/16/2023]
Abstract
Tissue engineering is a branch of regenerative medicine that harnesses biomaterial and stem cell research to utilise the body's natural healing responses to regenerate tissue and organs. There remain many unanswered questions in tissue engineering, with optimal biomaterial designs still to be developed and a lack of adequate stem cell knowledge limiting successful application. Advances in artificial intelligence (AI), and deep learning specifically, offer the potential to improve both scientific understanding and clinical outcomes in regenerative medicine. With a better understanding of how to integrate AI into current research and clinical practice, it offers an invaluable tool for improving patient outcomes.
Affiliation(s)
- Benita S Mackay
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Karen Marshall
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom
- James A Grant-Jacob
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Janos Kanczler
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom
- Robert W Eason
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom; Institute of Developmental Sciences, Faculty of Life Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Richard O C Oreffo
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom; Institute of Developmental Sciences, Faculty of Life Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Ben Mills
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
32
Sekhar A, Biswas S, Hazra R, Sunaniya AK, Mukherjee A, Yang L. Brain tumor classification using fine-tuned GoogLeNet features and machine learning algorithms: IoMT enabled CAD system. IEEE J Biomed Health Inform 2021; 26:983-991. [PMID: 34324425 DOI: 10.1109/jbhi.2021.3100758] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
In the healthcare research community, the Internet of Medical Things (IoMT) is transforming the healthcare system into the world of the future internet. In an IoMT-enabled computer-aided diagnosis (CAD) system, health-related information is stored via the internet and supportive data are provided to patients. Various smart devices interconnected via the internet help the patient communicate with a medical expert through an IoMT-based remote healthcare system for various life-threatening diseases, e.g., brain tumors. The brain tumor is one of the most dreadful diseases known to human beings; often, tumors are precursors to cancers, and the survival rates for these diseases are very low. Early detection and classification of tumors can therefore save many lives, and an IoMT-enabled CAD system plays a vital role in solving these problems. Deep learning, a new domain in machine learning, has attracted a lot of attention in the last few years, and the concept of Convolutional Neural Networks (CNNs) has been widely used in this field. In this paper, we classify brain tumors into three classes, namely glioma, meningioma, and pituitary, using a transfer learning model. The features of the brain MRI images are extracted using a pre-trained CNN, i.e., GoogLeNet. The features are then classified using classifiers such as softmax, Support Vector Machine (SVM), and K-Nearest Neighbor (K-NN). The proposed model is trained and tested on the CE-MRI Figshare dataset. Further, images from the Harvard medical repository dataset are also considered for experimental purposes to classify four types of tumors, and the results are compared with other state-of-the-art models. Performance measures such as accuracy, precision, recall, specificity, and F1 score are examined to evaluate the performance of the proposed model.
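The K-NN stage of such a pipeline classifies each image by comparing its extracted feature vector against the training vectors. A minimal Euclidean K-NN sketch (the GoogLeNet feature extraction itself is assumed to have already produced the vectors; this is not the authors' code):

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a query feature vector by majority vote among its k
    nearest training vectors under Euclidean distance."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    ranked = sorted(zip(train_feats, train_labels),
                    key=lambda fv: dist(fv[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

In practice the feature vectors would be the 1024-dimensional GoogLeNet pooling outputs rather than the toy 2-D points used for illustration.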
33
Li L, Zhu H, Wen L, Lan W, Yang Z. An Approach of Combining Convolution Neural Network and Graph Convolution Network to Predict the Progression of Myopia. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10576-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
34
Kumar A, Sharma K, Sharma A. Hierarchical deep neural network for mental stress state detection using IoT based biomarkers. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.01.030] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
35
Multi-Scale Convolutional Recurrent Neural Network for Bearing Fault Detection in Noisy Manufacturing Environments. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11093963] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
The failure of a facility to produce a product can have significant impacts on product quality. Most equipment failures occur in rotating equipment, with bearing damage being the biggest cause of failure in rotating equipment. In this paper, we propose a denoising autoencoder (DAE) and a multi-scale convolutional recurrent neural network (MS-CRNN), wherein the DAE handles bearing vibration signals under the same noisy conditions found in the field, and the MS-CRNN inspects and classifies defects. We experimented with adding random noise to create a dataset that resembled noisy manufacturing installations in the field. The accuracy of the proposed method was more than 90%, demonstrating that the algorithm can be applied in the field.
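The noise-injection step used to mimic noisy field conditions can be sketched as follows. The abstract only says "random noise", so the white-Gaussian-noise model and the SNR parameterization here are illustrative assumptions, not the paper's stated procedure:

```python
import math
import random

def add_noise(signal, snr_db, seed=0):
    """Add white Gaussian noise to a vibration signal at a target SNR (dB)."""
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)
    # noise power chosen so that 10*log10(p_signal / p_noise) == snr_db
    sigma = math.sqrt(p_signal / (10 ** (snr_db / 10.0)))
    return [s + rng.gauss(0.0, sigma) for s in signal]
```

Sweeping `snr_db` downward produces progressively noisier copies of the clean signals, which is one way to build the kind of field-like training set the paper describes.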
36
Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 2021; 47:e218-e227. [PMID: 32418340 DOI: 10.1002/mp.13764] [Citation(s) in RCA: 122] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/13/2019] [Accepted: 05/13/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a major field of research for the past few decades. CAD uses machine learning methods to analyze imaging and/or nonimaging patient data and make an assessment of the patient's condition, which can then be used to assist clinicians in their decision-making process. The recent success of deep learning technology in machine learning spurs new research and development efforts to improve CAD performance and to develop CAD for many other complex clinical tasks. In this paper, we discuss the potential and challenges of developing CAD tools using deep learning technology, or artificial intelligence (AI) in general, the pitfalls and lessons learned from CAD in screening mammography, and the considerations needed for future implementation of CAD or AI in clinical use. It is hoped that past experience and deep learning technology will lead to successful advancement and lasting growth in this new era of CAD, thereby enabling CAD to deliver intelligent aids that improve health care.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Lubomir M Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
37
Mun SK, Wong KH, Lo SCB, Li Y, Bayarsaikhan S. Artificial Intelligence for the Future Radiology Diagnostic Service. Front Mol Biosci 2021; 7:614258. [PMID: 33585563 PMCID: PMC7875875 DOI: 10.3389/fmolb.2020.614258] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 12/29/2020] [Indexed: 12/18/2022] Open
Abstract
Radiology historically has been a leader of digital transformation in healthcare. The introduction of digital imaging systems, picture archiving and communication systems (PACS), and teleradiology transformed radiology services over the past 30 years. Radiology is again at a crossroads for the next generation of transformation, possibly evolving into a one-stop integrated diagnostic service. Artificial intelligence and machine learning promise to offer radiology powerful new digital tools to facilitate the next transformation. The radiology community has been developing computer-aided diagnosis (CAD) tools based on machine learning (ML) over the past 20 years. Among various AI techniques, deep-learning convolutional neural networks (CNNs) and their variants have been widely used in medical image pattern recognition. Since the 1990s, many CAD tools and products have been developed. However, clinical adoption has been slow due to a lack of substantial clinical advantages, difficulties integrating into existing workflow, and uncertain business models. This paper proposes three pathways for AI's role in radiology beyond current CNN-based capabilities: 1) improve the performance of CAD, 2) improve the productivity of radiology services through AI-assisted workflow, and 3) develop radiomics that integrates data from radiology, pathology, and genomics to facilitate the emergence of a new integrated diagnostic service.
Affiliation(s)
- Seong K. Mun
- Arlington Innovation Center: Health Research, Virginia Tech-Washington DC Area, Arlington, VA, United States
38
Xu Y, Rather AM, Song S, Fang JC, Dupont RL, Kara UI, Chang Y, Paulson JA, Qin R, Bao X, Wang X. Ultrasensitive and Selective Detection of SARS-CoV-2 Using Thermotropic Liquid Crystals and Image-Based Machine Learning. CELL REPORTS. PHYSICAL SCIENCE 2020; 1:100276. [PMID: 33225318 PMCID: PMC7670228 DOI: 10.1016/j.xcrp.2020.100276] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 10/01/2020] [Accepted: 11/06/2020] [Indexed: 05/03/2023]
Abstract
Rapid, robust virus-detection techniques with ultrahigh sensitivity and selectivity are required to address the outbreak of the pandemic coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2). Here, we report that femtomolar concentrations of single-stranded ribonucleic acid (ssRNA) of SARS-CoV-2 trigger ordering transitions in liquid crystal (LC) films decorated with a cationic surfactant and a complementary 15-mer single-stranded deoxyribonucleic acid (ssDNA) probe. More importantly, the sensitivity of the LC to the SARS ssRNA, which carries a 3-bp mismatch relative to the SARS-CoV-2 ssRNA, is measured to be seven orders of magnitude lower, suggesting that the LC ordering transitions depend strongly on the targeted oligonucleotide sequence. Finally, we design an LC-based diagnostic kit and a smartphone-based application (app) to enable automatic detection of SARS-CoV-2 ssRNA, which could be used for reliable self-testing for SARS-CoV-2 at home without the need for complex equipment or procedures.
Affiliation(s)
- Yang Xu
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA
- Adil M Rather
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA
- Shuang Song
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, OH 43210, USA
- Jen-Chun Fang
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA
- Robert L Dupont
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA
- Ufuoma I Kara
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA
- Yun Chang
- Davidson School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Joel A Paulson
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA; Sustainability Institute, The Ohio State University, Columbus, OH 43210, USA
- Rongjun Qin
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, OH 43210, USA; Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA; Translational Data Analytics Institute, The Ohio State University, Columbus, OH 43210, USA
- Xiaoping Bao
- Davidson School of Chemical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Xiaoguang Wang
- William G. Lowrie Department of Chemical and Biomolecular Engineering, The Ohio State University, Columbus, OH 43210, USA; Sustainability Institute, The Ohio State University, Columbus, OH 43210, USA
39
40
Yu L, Qin Z, Zhuang T, Ding Y, Qin Z, Raymond Choo KK. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.113] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
41
From chemical structure to quantitative polymer properties prediction through convolutional neural networks. POLYMER 2020. [DOI: 10.1016/j.polymer.2020.122341] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
42
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development over the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The state-of-the-art machine learning technique known as deep learning (DL) has revolutionized speech and text recognition as well as computer vision. The potential of a major breakthrough by DL in medical image analysis and other CAD applications for patient care has generated unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into the clinical workflow.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
43
Wang S, Dong L, Wang X, Wang X. Classification of Pathological Types of Lung Cancer from CT Images by Deep Residual Neural Networks with Transfer Learning Strategy. Open Med (Wars) 2020; 15:190-197. [PMID: 32190744 PMCID: PMC7065426 DOI: 10.1515/med-2020-0028] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 12/24/2019] [Indexed: 11/15/2022] Open
Abstract
Lung cancer is one of the malignant tumors most harmful to human health, and accurate determination of its pathological type is vital for treatment. Traditionally, identifying the pathological type requires a histopathological examination, which is invasive and time-consuming. In this work, a novel residual neural network is proposed to identify the pathological type of lung cancer from CT images. Because CT images are scarce in practice, we explored a medical-to-medical transfer learning strategy: a residual neural network is pre-trained on the public medical image dataset LUNA16 and then fine-tuned on a proprietary lung cancer dataset collected at Shandong Provincial Hospital. Experiments show that our method achieves 85.71% accuracy in identifying pathological types of lung cancer from CT images, outperforming other models trained with 2054 labels. Our method performs better than AlexNet, VGG16, and DenseNet, providing an efficient, non-invasive tool for pathological diagnosis.
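The pre-train-then-fine-tune strategy described in this abstract can be illustrated with a toy sketch: a feature extractor learned on a source dataset is frozen, and only a new classification head is trained on the small target dataset. All names, sizes, and the simple NumPy "network" below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy sketch of fine-tuning: freeze a pre-trained feature extractor,
# retrain only the classification head on the target domain.
rng = np.random.default_rng(0)

def frozen_features(x, W_frozen):
    """Pre-trained feature extractor (kept fixed during fine-tuning)."""
    return np.maximum(0.0, x @ W_frozen)  # ReLU features

W_frozen = rng.normal(size=(64, 32))      # weights "learned" on the source task
W_head = np.zeros((32, 3))                # new head, e.g. 3 pathological types

# One gradient step on the head only (softmax cross-entropy).
x = rng.normal(size=(8, 64))              # batch of target-domain samples
y = rng.integers(0, 3, size=8)            # target labels
f = frozen_features(x, W_frozen)          # frozen features, shape (8, 32)
logits = f @ W_head
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
grad = p.copy()
grad[np.arange(8), y] -= 1.0              # d(loss)/d(logits)
W_head -= 0.1 * (f.T @ grad) / 8          # update the head; W_frozen untouched
```

In the paper's setting the frozen extractor would be the residual network's convolutional backbone pre-trained on LUNA16, with fine-tuning optionally unfreezing later layers as well.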
Affiliation(s)
- Shudong Wang
- College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China; School of Electrical Engineering and Automation, Tiangong University, Tianjin 300387, China
- Liyuan Dong
- College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China
- Xun Wang
- College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China; School of Electrical Engineering and Automation, Tiangong University, Tianjin 300387, China
- Xingguang Wang
- Department of Respiratory Medicine, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, Shandong, China
44
Stamate D, Smith R, Tsygancov R, Vorobev R, Langham J, Stahl D, Reeves D. Applying Deep Learning to Predicting Dementia and Mild Cognitive Impairment. IFIP ADVANCES IN INFORMATION AND COMMUNICATION TECHNOLOGY 2020. [PMCID: PMC7256597 DOI: 10.1007/978-3-030-49186-4_26] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Dementia has a large negative impact on global healthcare and society, and diagnosis is challenging because there is no standardised test. The purpose of this paper is to analyse ADNI data and determine its effectiveness for building classification models that differentiate the categories Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Dementia (DEM), based on tuning three Deep Learning models: two Multi-Layer Perceptron models (MLP1 and MLP2) and a Convolutional Bidirectional Long Short-Term Memory (ConvBLSTM) model. The results show that the MLP1 and MLP2 models accurately distinguish the DEM, MCI, and CN classes, with accuracies as high as 0.86 (SD 0.01). The ConvBLSTM model was slightly less accurate but was explored for comparison with the MLP models and for future extensions of this work that will take advantage of time-related information. Although the performance of the ConvBLSTM model was negatively impacted by a lack of visit-code data, opportunities for improvement were identified, particularly in pre-processing.
45
Zak M, Krzyżak A. Classification of Lung Diseases Using Deep Learning Models. LECTURE NOTES IN COMPUTER SCIENCE 2020. [PMCID: PMC7304013 DOI: 10.1007/978-3-030-50420-5_47] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/30/2022]
46
Vijh S, Sharma S, Gaurav P. Brain Tumor Segmentation Using OTSU Embedded Adaptive Particle Swarm Optimization Method and Convolutional Neural Network. LECTURE NOTES ON DATA ENGINEERING AND COMMUNICATIONS TECHNOLOGIES 2020:171-194. [DOI: 10.1007/978-3-030-25797-2_8] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
47
Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep Learning in Medical Image Analysis. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020; 1213:3-21. [PMID: 32030660 PMCID: PMC7442218 DOI: 10.1007/978-3-030-33128-3_1] [Citation(s) in RCA: 311] [Impact Index Per Article: 62.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Deep learning is the state-of-the-art machine learning approach. Its success in many pattern recognition applications has brought excitement and high expectations that deep learning, or artificial intelligence (AI), can bring revolutionary changes to health care. Early studies applying deep learning to lesion detection or classification have reported performance superior to that of conventional techniques, and even to that of radiologists in some tasks. The potential of applying deep-learning-based medical image analysis to computer-aided diagnosis (CAD), thus providing decision support to clinicians and improving the accuracy and efficiency of various diagnostic and treatment processes, has spurred new research and development efforts in CAD. Despite the optimism in this new era of machine learning, the development and implementation of CAD or AI tools in clinical practice face many challenges. In this chapter, we discuss some of these issues and the efforts needed to develop robust deep-learning-based CAD tools and integrate them into the clinical workflow, thereby advancing towards the goal of providing reliable intelligent aids for patient care.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
48
Using multi-layer perceptron with Laplacian edge detector for bladder cancer diagnosis. Artif Intell Med 2019; 102:101746. [PMID: 31980088 DOI: 10.1016/j.artmed.2019.101746] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 10/22/2019] [Accepted: 10/27/2019] [Indexed: 12/26/2022]
Abstract
In this paper, a urinary bladder cancer diagnostic method based on a Multi-Layer Perceptron and a Laplacian edge detector is presented. The aim is to investigate whether a simpler method (the Multi-Layer Perceptron) can be implemented alongside commonly used methods, such as deep convolutional neural networks, for urinary bladder cancer detection. The dataset used for this research consisted of 1997 images of bladder cancer and 986 images of non-cancerous tissue. The results show that a Multi-Layer Perceptron trained and tested on images pre-processed with a Laplacian edge detector achieves AUC values up to 0.99. When different image sizes are compared, the best results are achieved with 50×50 and 100×100 images.
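The Laplacian edge-detector pre-processing step described in this abstract can be sketched as a plain 2-D convolution with the standard 4-neighbour discrete Laplacian kernel, whose output is then flattened into a feature vector for the MLP. The kernel choice, the random stand-in image, and the 50×50 size (taken from the paper's best-performing configuration) are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

# Standard 4-neighbour discrete Laplacian kernel (an assumption; the paper
# does not specify its exact kernel).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_filter(img):
    """Convolve a 2-D grayscale image with the Laplacian kernel ('valid' mode)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

img = np.random.rand(50, 50)   # stand-in for a 50x50 tissue image
edges = laplacian_filter(img)  # edge map, shape (48, 48)
features = edges.ravel()       # flattened input vector for the MLP, length 2304
```

Because the kernel sums to zero, flat (constant-intensity) regions map to zero response, so the MLP effectively sees only intensity transitions such as tissue boundaries.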
49
Detection of focal epilepsy in brain maps through a novel pattern recognition technique. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04544-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
50
Park S, Baek SS, Pyo J, Pachepsky Y, Park J, Cho KH. Deep neural networks for modeling fouling growth and flux decline during NF/RO membrane filtration. J Memb Sci 2019. [DOI: 10.1016/j.memsci.2019.06.004] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]