1.
Methods in Medicine CAM. Retracted: Evaluating Deep Neural Network Architectures with Transfer Learning for Pneumonitis Diagnosis. Comput Math Methods Med 2023; 2023:9858237. [PMID: 37601271 PMCID: PMC10432736 DOI: 10.1155/2023/9858237]
Abstract
[This retracts the article DOI: 10.1155/2021/8036304.]
2.
Albashish D. Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images. PeerJ Comput Sci 2022; 8:e1031. [PMID: 35875641 PMCID: PMC9299234 DOI: 10.7717/peerj-cs.1031]
Abstract
Deep convolutional neural networks (CNNs) show potential for computer-aided diagnosis systems (CADs) by learning features directly from images rather than using traditional feature extraction methods. Nevertheless, due to the limited sample sizes and the heterogeneity of tumor presentation in medical images, CNN models trained from scratch suffer from overfitting. Alternatively, transfer learning (TL) from pretrained CNNs, originally designed for non-medical applications, can derive tumor knowledge from medical image datasets, alleviating the need for large training sets. This study proposes two ensemble learning techniques, E-CNN (product rule) and E-CNN (majority voting), based on adapting pretrained CNN models to classify colon cancer histopathology images. The ensemble members are constructed by adapting pretrained DenseNet121, MobileNetV2, InceptionV3, and VGG16 models. The adaptation follows a block-wise fine-tuning policy in which a set of dense and dropout layers is appended to each pretrained model to capture the variation in the histology images. The models' decisions are then fused via product-rule and majority-voting aggregation. The proposed model was validated against the standard pretrained models and the most recent work on two publicly available benchmark colon histopathology datasets, Stoean (357 images) and Kather colorectal histology (5,000 images), achieving accuracies of 97.20% and 91.28%, respectively. These results outperform state-of-the-art studies and confirm that the proposed E-CNNs could be extended to other medical image applications.
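As a hedged illustration only (not code from the paper), the two fusion rules named in the abstract can be sketched over hypothetical softmax outputs of the four adapted models:

```python
import numpy as np

def product_rule(probs):
    """Fuse per-model class-probability vectors by elementwise product, then argmax."""
    fused = np.prod(np.stack(probs), axis=0)
    return int(np.argmax(fused))

def majority_vote(probs):
    """Fuse by taking the most frequent argmax class across the models."""
    votes = [int(np.argmax(p)) for p in probs]
    return max(set(votes), key=votes.count)

# Hypothetical softmax outputs of four models over three tissue classes.
preds = [
    np.array([0.7, 0.2, 0.1]),
    np.array([0.6, 0.3, 0.1]),
    np.array([0.2, 0.5, 0.3]),
    np.array([0.5, 0.4, 0.1]),
]
print(product_rule(preds))   # -> 0
print(majority_vote(preds))  # -> 0
```

The product rule rewards classes that every model assigns non-trivial probability, while majority voting ignores confidence and counts only the top choices; the two can disagree when one model is confidently wrong.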
Affiliation(s)
- Dheeb Albashish
- Computer Science Department, Prince Abdullah bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Alsalt, Jordan
3.
Roger R, Hilmes MA, Williams JM, Moore DJ, Powers AC, Craddock RC, Virostko J. Deep learning-based pancreas volume assessment in individuals with type 1 diabetes. BMC Med Imaging 2022; 22:5. [PMID: 34986790 PMCID: PMC8734282 DOI: 10.1186/s12880-021-00729-7]
Abstract
Pancreas volume is reduced in individuals with diabetes and in autoantibody-positive individuals at high risk for developing type 1 diabetes (T1D). Studies are underway to assess pancreas volume in large clinical databases and cohorts, but manual pancreas annotation is time-consuming and subjective, preventing extension to large studies. This study develops a deep learning approach for automated pancreas volume measurement in individuals with diabetes. A convolutional neural network was trained using manual pancreas annotations on 160 abdominal magnetic resonance imaging (MRI) scans from individuals with T1D, controls, or a combination thereof. Models trained on each cohort were then tested on scans of 25 individuals with T1D. Deep learning and manual segmentations of the pancreas displayed high overlap (Dice coefficient = 0.81) and excellent correlation of pancreas volume measurements (R2 = 0.94). Correlation was highest when the training data included individuals both with and without T1D. The pancreas of individuals with T1D can thus be automatically segmented to measure pancreas volume, and the algorithm can be applied to large imaging datasets to quantify the spectrum of human pancreas volume.
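For illustration, the Dice coefficient used above to score segmentation overlap can be sketched on toy binary masks (the arrays below are hypothetical stand-ins for voxel-wise pancreas segmentations, not the study's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap of two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy 1-D masks standing in for model and manual segmentations.
a = np.array([1, 1, 1, 0, 0])
b = np.array([1, 1, 0, 0, 0])
print(dice_coefficient(a, b))  # 2*2 / (3+2) = 0.8
```

A Dice of 1.0 means perfect voxel-wise agreement; the study's 0.81 indicates high but imperfect overlap, typical for small, irregular organs like the pancreas.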
Affiliation(s)
- Raphael Roger
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
- Melissa A Hilmes
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Jonathan M Williams
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Daniel J Moore
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pathology, Immunology, and Microbiology, Vanderbilt University, Nashville, TN, USA
- Alvin C Powers
- Division of Diabetes, Endocrinology, and Metabolism, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Molecular Physiology and Biophysics, Vanderbilt University, Nashville, TN, USA; VA Tennessee Valley Healthcare System, Nashville, TN, USA
- R Cameron Craddock
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA
- John Virostko
- Department of Diagnostic Medicine, Dell Medical School, University of Texas at Austin, 1701 Trinity St., Stop C0200, Austin, TX, 78712, USA; Livestrong Cancer Institutes, University of Texas at Austin, Austin, TX, USA; Department of Oncology, University of Texas at Austin, Austin, TX, USA; Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX, USA
4.
Narayanasamy SK, Srinivasan K, Mian Qaisar S, Chang CY. Ontology-Enabled Emotional Sentiment Analysis on COVID-19 Pandemic-Related Twitter Streams. Front Public Health 2021; 9:798905. [PMID: 34938715 PMCID: PMC8685242 DOI: 10.3389/fpubh.2021.798905]
Abstract
The exponential growth of social media users has changed how potential information is retrieved from user-generated content and has transformed information-retrieval mechanisms through developments around the concept of the “web of data”. In this regard, the proposed ontology-based sentiment analysis provides two novel approaches. First, emotion extraction on tweets related to COVID-19 is carried out with a well-formed taxonomy that comprises possible emotional concepts with fine-grained properties and polarized values. Second, the entities present in a tweet can be analyzed for semantic associativity. Emotion extraction covers two cases: (i) words directly associated with the emotional concepts present in the taxonomy and (ii) words only indirectly associated with those concepts. Although the latter case makes it challenging to process tweets for hidden patterns and extract the meaningful facts associated with them, the proposed work extracts and detects almost 81% of true positives and a considerable share of the false negatives. Finally, the approach's superior performance is demonstrated through comparison with peer-level approaches.
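As a rough sketch only (the taxonomy entries below are hypothetical, not the paper's ontology), the direct case, where a tweet word matches an emotional concept with a polarity, might look like:

```python
# Hypothetical emotion taxonomy: word -> (emotional concept, polarity).
taxonomy = {
    "relieved": ("joy", +1),
    "grateful": ("joy", +1),
    "anxious": ("fear", -1),
    "isolated": ("sadness", -1),
}

def extract_emotions(tweet):
    """Return (word, concept, polarity) for words directly matching the taxonomy."""
    hits = []
    for word in tweet.lower().split():
        token = word.strip(".,!?#@")  # drop simple tweet punctuation
        if token in taxonomy:
            concept, polarity = taxonomy[token]
            hits.append((token, concept, polarity))
    return hits

print(extract_emotions("Feeling anxious but grateful for the nurses!"))
# -> [('anxious', 'fear', -1), ('grateful', 'joy', 1)]
```

The harder indirect case described in the abstract would require resolving words to taxonomy concepts through semantic associations rather than exact lookup.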
Affiliation(s)
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
- Saeed Mian Qaisar
- Electrical and Computer Engineering Department, Effat University, Jeddah, Saudi Arabia
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin, Taiwan; Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
5.
Wu J, Hu R, Xiao Z, Chen J, Liu J. Vision Transformer-based recognition of diabetic retinopathy grade. Med Phys 2021; 48:7850-7863. [PMID: 34693536 DOI: 10.1002/mp.15312]
Abstract
BACKGROUND In the domain of natural language processing, Transformers are recognized as state-of-the-art models which, unlike typical convolutional neural networks (CNNs), do not rely on convolution layers. Instead, Transformers employ multi-head attention mechanisms as the main building block to capture long-range contextual relations between image pixels. CNNs have recently dominated deep learning solutions for diabetic retinopathy grade recognition; however, spurred by the advantages of Transformers, we propose a Transformer-based method suited to recognizing the grade of diabetic retinopathy. PURPOSE The purposes of this work are to demonstrate that (i) the pure attention mechanism is suitable for diabetic retinopathy grade recognition and (ii) Transformers can replace traditional CNNs for this task. METHODS This paper proposes a Vision Transformer-based method to recognize the grade of diabetic retinopathy. Fundus images are subdivided into non-overlapping patches, which are flattened into sequences and passed through linear and positional embedding to preserve positional information. The resulting sequence is fed into several multi-head attention layers to generate the final representation. In the classification stage, the first token of the sequence is input to a softmax classification layer to produce the recognition output. RESULTS The dataset for training and testing comprises fundus images of different resolutions, subdivided into patches. We compare our method against current CNNs and extreme learning machines and achieve appealing performance. Specifically, the proposed architecture attains an accuracy of 91.4%, specificity of 0.977 (95% confidence interval (CI): 0.951-1), precision of 0.928 (95% CI: 0.852-1), sensitivity of 0.926 (95% CI: 0.863-0.989), a quadratic weighted kappa score of 0.935, and an area under the curve (AUC) of 0.986.
CONCLUSION Our comparative experiments against current methods show that our model is competitive and that an attention mechanism based on a Vision Transformer is promising for the diabetic retinopathy grade recognition task.
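As an illustrative sketch of the patch-extraction step described in METHODS (not the authors' implementation; the 224-pixel image size and 16-pixel patch size are assumptions), a fundus image can be split into non-overlapping flattened patches like so:

```python
import numpy as np

def patchify(image, patch):
    """Split an (H, W, C) image into non-overlapping, flattened square patches."""
    h, w, c = image.shape
    patches = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            patches.append(image[i:i + patch, j:j + patch].reshape(-1))
    return np.stack(patches)

rng = np.random.default_rng(0)
fundus = rng.random((224, 224, 3))  # stand-in for a fundus image
tokens = patchify(fundus, 16)       # 14 * 14 = 196 patches
print(tokens.shape)                 # (196, 768): 16*16*3 values per patch
```

In a full Vision Transformer, a learned linear projection plus positional embeddings would follow, and a prepended class token, whose final state feeds the softmax classifier, plays the role of the "first token" mentioned in the abstract.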
Affiliation(s)
- Jianfang Wu
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Ruo Hu
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Zhenghong Xiao
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Jiaxu Chen
- School of Traditional Chinese Medicine, Jinan University, Guangzhou, China