201
Feng K, Jiang H, Yin C, Sun H. Gene regulatory network inference based on causal discovery integrating with graph neural network. Quantitative Biology 2023; 11:434-450. [DOI: 10.1002/qub2.26]
Abstract
Gene regulatory network (GRN) inference from gene expression data is a significant approach to understanding aspects of the biological system. Compared with generalized correlation-based methods, causality-inspired ones are more rational for inferring regulatory relationships. We propose GRINCD, a novel GRN inference framework empowered by graph representation learning and causal asymmetric learning that considers both linear and non-linear regulatory relationships. First, a high-quality representation of each gene is generated using a graph neural network. Then, we apply the additive noise model to predict the causal regulation of each regulator-target pair. Additionally, we design two channels and finally assemble them for robust prediction. Comprehensive comparisons of our framework with state-of-the-art methods based on different principles, on numerous datasets of diverse types and scales, show that it achieves superior or comparable performance under various evaluation metrics. Our work provides a new clue for constructing GRNs, and GRINCD also shows potential in identifying key factors affecting cancer development.
Affiliation(s)
- Ke Feng
- School of Artificial Intelligence, Jilin University, Changchun, China
- Hongyang Jiang
- School of Artificial Intelligence, Jilin University, Changchun, China
- Chaoyi Yin
- School of Artificial Intelligence, Jilin University, Changchun, China
- Huiyan Sun
- School of Artificial Intelligence, Jilin University, Changchun, China
- International Center of Future Science, Jilin University, Changchun, China
- Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, Ministry of Education, Changchun, China
202
Liu L, Zhou Y, Lei X. RMDGCN: Prediction of RNA methylation and disease associations based on graph convolutional network with attention mechanism. PLoS Comput Biol 2023; 19:e1011677. [PMID: 38055721] [DOI: 10.1371/journal.pcbi.1011677]
Abstract
RNA modification is a post-transcriptional modification that occurs in all organisms, plays a crucial role throughout the RNA life cycle, and is closely related to many life processes. As one of the newly discovered modifications, N1-methyladenosine (m1A) plays an important role in the regulation of gene expression and is closely related to the occurrence and development of diseases. However, due to the low abundance of m1A, verifying the associations between m1As and diseases through wet experiments requires considerable manpower and resources. In this study, we propose a computational method for predicting the associations between RNA methylation and disease based on a graph convolutional network with an attention mechanism (RMDGCN). We build an adjacency matrix from the collected m1A-disease associations and use positive-unlabeled learning to increase the number of positive samples. By extracting the features of m1As and diseases, a heterogeneous network is constructed, and a GCN with an attention mechanism is adopted to predict the associations between m1As and diseases. The experimental results indicate that under 5-fold cross-validation, RMDGCN is superior to other methods (AUC = 0.9892 and AUPR = 0.8682). In addition, case studies indicate that RMDGCN can predict the relationships between unknown m1As and diseases. In summary, RMDGCN is an effective method for predicting the associations between m1As and diseases.
Affiliation(s)
- Lian Liu
- School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
- Yumeng Zhou
- School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
- Xiujuan Lei
- School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
203
Li Y, Tang Y, Liu Y, Zheng D. Semi-supervised Counting of Grape Berries in the Field Based on Density Mutual Exclusion. Plant Phenomics 2023; 5:0115. [PMID: 38033720] [PMCID: PMC10684290] [DOI: 10.34133/plantphenomics.0115]
Abstract
Automated counting of grape berries has become one of the most important tasks in grape yield prediction. However, the dense distribution of berries and the severe occlusion between them pose great challenges to deep learning-based counting algorithms, and collecting the data required for model training is tedious and expensive. To address these issues and count grape berries cost-effectively, a semi-supervised method for counting grape berries in the field based on density mutual exclusion (CDMENet) is proposed. The algorithm uses VGG16 as the backbone to extract image features and introduces auxiliary tasks based on density mutual exclusion, which exploit the spatial distribution pattern of grape berries across density levels to make full use of unlabeled data. In addition, a density difference loss is designed, which enhances the feature representation by amplifying the differences between features at different density levels. Experimental results on the field grape berry dataset show that CDMENet achieves fewer counting errors. Compared with the state of the art, the coefficient of determination (R2) is improved by 6.10%, and the mean absolute error and root mean square error are reduced by 49.36% and 54.08%, respectively. The code is available at https://github.com/youth-tang/CDMENet-main.
Affiliation(s)
- Yanan Li
- School of Computer Science and Engineering, School of Artificial Intelligence, Wuhan Institute of Technology, Wuhan 430205, China
- Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430073, China
- Yuling Tang
- School of Computer Science and Engineering, School of Artificial Intelligence, Wuhan Institute of Technology, Wuhan 430205, China
- Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430073, China
- Yifei Liu
- School of Computer Science and Engineering, School of Artificial Intelligence, Wuhan Institute of Technology, Wuhan 430205, China
- Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430073, China
- Dingrun Zheng
- School of Computer Science and Engineering, School of Artificial Intelligence, Wuhan Institute of Technology, Wuhan 430205, China
- Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430073, China
204
Gao R, Luo G, Ding R, Yang B, Sun H. A Lightweight Deep Learning Framework for Automatic MRI Data Sorting and Artifacts Detection. J Med Syst 2023; 47:124. [PMID: 37999807] [DOI: 10.1007/s10916-023-02017-z]
Abstract
The purpose of this study was to develop a lightweight and easily deployable deep learning system for fully automated content-based brain MRI sorting and artifact detection. A total of 22,092 MRI volumes from 4,076 patients between 2017 and 2021 were included in this retrospective study. The dataset mainly contains four common contrasts (T1-weighted (T1w), contrast-enhanced T1-weighted (T1c), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR)) in three orientations (axial, coronal, and sagittal), plus magnetic resonance angiography (MRA), as well as three typical artifacts (motion, aliasing, and metal artifacts). In the proposed architecture, a pre-trained EfficientNetB0 with the fully connected layers removed was used as the feature extractor, and a multilayer perceptron (MLP) with four hidden layers was used as the classifier. Precision, recall, F1-score, accuracy, the number of trainable parameters, and floating-point operations (FLOPs) were calculated to evaluate the performance of the proposed model, which was also compared with four existing CNN-based models in terms of classification performance and model size. The overall precision, recall, F1-score, and accuracy of the proposed model were 0.983, 0.926, 0.950, and 0.991, respectively. The proposed model outperformed the other four CNN-based models, and its number of trainable parameters and FLOPs were the smallest among the investigated models. Our proposed model can accurately sort head MRI scans and identify artifacts with minimal computational resources and can be used as a tool to support big medical imaging data research and facilitate large-scale database management.
Affiliation(s)
- Ronghui Gao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Guoting Luo
- Department of Radiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Renxin Ding
- IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Bo Yang
- IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Huaiqiang Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, China
- Huaxi MR Research Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
205
Wu X, Liu Y. Predicting Gas Adsorption without the Knowledge of Pore Structures: A Machine Learning Method Based on Classical Density Functional Theory. J Phys Chem Lett 2023; 14:10094-10102. [PMID: 37921618] [DOI: 10.1021/acs.jpclett.3c02708]
Abstract
Predicting gas adsorption from the pore structure is an intuitive and widely used methodology in adsorption research. However, in real-world systems, the structural information is usually very complicated and can only be obtained approximately from characterization data. In this work, we developed a machine learning (ML) method to predict gas adsorption from the raw characterization data of N2 adsorption. The ML method is modeled by a convolutional neural network and trained on a large number of data generated from classical density functional theory, and the model gives a very accurate prediction of Ar adsorption. Though the training set is limited to modeling slit pores, the model can be applied to three-dimensional structured pores and real-world materials. The good agreement suggests that there is a universal relationship among the adsorption isotherms of different adsorbates, which can be captured by the ML model.
Affiliation(s)
- Xiangkun Wu
- School of Chemical Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China
- Yu Liu
- School of Chemical Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China
206
Dehghan Rouzi M, Moshiri B, Khoshnevisan M, Akhaee MA, Jaryani F, Salehi Nasab S, Lee M. Breast Cancer Detection with an Ensemble of Deep Learning Networks Using a Consensus-Adaptive Weighting Method. J Imaging 2023; 9:247. [PMID: 37998094] [PMCID: PMC10671922] [DOI: 10.3390/jimaging9110247]
Abstract
Breast cancer's high mortality rate is often linked to late diagnosis, with mammograms serving as key but sometimes limited tools in early detection. To enhance diagnostic accuracy and speed, this study introduces a novel computer-aided detection (CAD) ensemble system. The system integrates advanced deep learning networks (EfficientNet, Xception, MobileNetV2, InceptionV3, and ResNet50) via our consensus-adaptive weighting (CAW) method, which permits the dynamic adjustment of multiple deep networks and bolsters the system's detection capabilities. Our approach also addresses a major challenge of pixel-level data annotation for Faster R-CNNs, highlighted in a prominent previous study. Evaluations on various datasets, including the cropped DDSM (Digital Database for Screening Mammography), DDSM, and INbreast, demonstrated the system's superior performance. In particular, our CAD system showed marked improvement on the cropped DDSM dataset, enhancing detection rates by approximately 1.59% and achieving an accuracy of 95.48%. This system represents a significant advancement in early breast cancer detection, offering the potential for more precise and timely diagnosis and, ultimately, improved patient outcomes.
Affiliation(s)
- Mohammad Dehghan Rouzi
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran
- Behzad Moshiri
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Mohammad Ali Akhaee
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14174-66191, Iran
- Farhang Jaryani
- Human Genome Sequencing Center, Baylor College of Medicine, Houston, TX 77030, USA
- Samaneh Salehi Nasab
- Department of Computer Engineering, Lorestan University, Khorramabad 68151-44316, Iran
- Myeounggon Lee
- College of Health Sciences, Dong-A University, Saha-gu, Busan 49315, Republic of Korea
207
Zhang D, Fan B, Lv L, Li D, Yang H, Jiang P, Jin F. Research hotspots and trends of artificial intelligence in rheumatoid arthritis: A bibliometric and visualized study. Math Biosci Eng 2023; 20:20405-20421. [PMID: 38124558] [DOI: 10.3934/mbe.2023902]
Abstract
Artificial intelligence (AI) applications in rheumatoid arthritis (RA) are becoming increasingly popular. In this bibliometric study, we aimed to analyze the characteristics of publications relevant to research on AI in RA, thereby developing a thorough overview of this research topic. The Web of Science was used to retrieve publications on the application of AI in RA from 2003 to 2022. Bibliometric analysis and visualization were performed using Microsoft Excel (2019), R (4.2.2), and VOSviewer (1.6.18). The overall distribution of yearly outputs, leading countries, top institutions and authors, active journals, co-cited references, and keywords were analyzed. A total of 859 relevant articles were identified in the Web of Science, with an increasing trend. The USA and China were the leading countries in this field, accounting for 71.59% of publications in total. Harvard University was the most influential institution, and Arthritis Research & Therapy was the most active journal. Primary topics in this field focused on estimating the risk of developing RA; diagnosing RA using sensor, clinical, imaging, and omics data; identifying RA patient phenotypes from electronic health records; predicting treatment response; tracking disease progression; predicting prognosis; and developing new drugs. Machine learning and deep learning algorithms were the recent research hotspots and trends in this field. AI has potential applications in many aspects of RA, including risk assessment, screening, early diagnosis, monitoring, prognosis determination, achieving optimal therapeutic outcomes, and new drug development. Incorporating machine learning and deep learning algorithms into real-world clinical practice will be a future research hotspot and trend for AI in RA, and extensive collaboration to improve model maturity and robustness will be a critical step in the advancement of AI in healthcare.
Affiliation(s)
- Di Zhang
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan 250011, China
- Bing Fan
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan 250011, China
- Liu Lv
- Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing 100700, China
- Da Li
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan 250011, China
- Huijun Yang
- Gansu Provincial Hospital of TCM, Lanzhou 730050, China
- Ping Jiang
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan 250011, China
- Fangmei Jin
- Gansu Provincial Hospital of TCM, Lanzhou 730050, China
208
Chen B, Jin J, Liu H, Yang Z, Zhu H, Wang Y, Lin J, Wang S, Chen S. Trends and hotspots in research on medical images with deep learning: a bibliometric analysis from 2013 to 2023. Front Artif Intell 2023; 6:1289669. [PMID: 38028662] [PMCID: PMC10665961] [DOI: 10.3389/frai.2023.1289669]
Abstract
Background With the rapid development of the internet, the improvement of computing power, and the continuous advancement of algorithms, deep learning has developed rapidly in recent years and has been widely applied in many fields. Previous studies have shown that deep learning performs excellently in image processing, and deep learning-based medical image processing may help solve the difficulties faced by traditional approaches. This technology has attracted the attention of many scholars in the fields of computer science and medicine. This study summarizes the knowledge structure of deep learning-based medical image processing research through bibliometric analysis and explores the research hotspots and possible development trends in this field. Methods The Web of Science Core Collection database was searched using the terms "deep learning," "medical image processing," and their synonyms, and CiteSpace was used for visual analysis of authors, institutions, countries, keywords, co-cited references, co-cited authors, and co-cited journals. Results The analysis was conducted on 562 highly cited papers retrieved from the database. The annual publication volume shows an upward trend. Pheng-Ann Heng, Hao Chen, and Klaus Hermann Maier-Hein are among the active authors in this field. The Chinese Academy of Sciences has the highest number of publications, while the institution with the highest centrality is Stanford University. The United States has the highest number of publications, followed by China. The most frequent keyword is "deep learning," and the highest-centrality keyword is "algorithm." The most cited author is Kaiming He, and the author with the highest centrality is Yoshua Bengio. Conclusion The application of deep learning in medical image processing is becoming increasingly common, with many active authors, institutions, and countries in this field. Current research mainly focuses on deep learning, convolutional neural networks, classification, diagnosis, segmentation, image, algorithm, and artificial intelligence. The research focus and trends are gradually shifting toward more complex and systematic directions, and deep learning technology will continue to play an important role.
Affiliation(s)
- Borui Chen
- First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jing Jin
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haichao Liu
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Zhengyu Yang
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haoming Zhu
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Yu Wang
- First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jianping Lin
- The School of Health, Fujian Medical University, Fuzhou, China
- Shizhong Wang
- The School of Health, Fujian Medical University, Fuzhou, China
- Shaoqing Chen
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
209
Kim JH, Choe AR, Park Y, Song EM, Byun JR, Cho MS, Yoo Y, Lee R, Kim JS, Ahn SH, Jung SA. Using a Deep Learning Model to Address Interobserver Variability in the Evaluation of Ulcerative Colitis (UC) Severity. J Pers Med 2023; 13:1584. [PMID: 38003899] [PMCID: PMC10672717] [DOI: 10.3390/jpm13111584]
Abstract
The use of endoscopic images for the accurate assessment of ulcerative colitis (UC) severity is crucial to determining appropriate treatment. However, experts may interpret these images differently, leading to inconsistent diagnoses. This study aims to address the issue by introducing a standardization method based on deep learning. We collected 254 rectal endoscopic images from 115 patients with UC, and five experts in endoscopic image interpretation assigned classification labels based on the Ulcerative Colitis Endoscopic Index of Severity (UCEIS) scoring system. Interobserver variance analysis of the five experts yielded an intraclass correlation coefficient of 0.8431 for UCEIS scores and a kappa coefficient of 0.4916 when the UCEIS scores were transformed into UC severity measures. To establish a consensus, we created a model that considered only the images and labels on which more than half of the experts agreed. This consensus model achieved an accuracy of 0.94 when tested with 50 images. Compared with models trained from individual expert labels, the consensus model demonstrated the most reliable prediction results.
Affiliation(s)
- Jeong-Heon Kim
- Department of Medicine, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- A Reum Choe
- Department of Internal Medicine, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Yehyun Park
- Department of Internal Medicine, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Eun-Mi Song
- Department of Internal Medicine, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Ju-Ran Byun
- Department of Internal Medicine, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Min-Sun Cho
- Department of Pathology, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Youngeun Yoo
- Department of Pathology, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Rena Lee
- Department of Bioengineering, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Jin-Sung Kim
- Department of Medicine, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- So-Hyun Ahn
- Ewha Medical Research Institute, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
- Sung-Ae Jung
- Department of Internal Medicine, Ewha Womans University College of Medicine, Seoul 03760, Republic of Korea
210
Xiao H, Song W, Liu C, Peng B, Zhu M, Jiang B, Liu Z. Reconstruction of central arterial pressure waveform based on CBi-SAN network from radial pressure waveform. Artif Intell Med 2023; 145:102683. [PMID: 37925212] [DOI: 10.1016/j.artmed.2023.102683]
Abstract
Central arterial pressure (CAP) is an important physiological indicator of the human cardiovascular system, diseases of which represent one of the greatest threats to human health. Accurate non-invasive detection and reconstruction of CAP waveforms are crucial for the reliable treatment of cardiovascular diseases. However, traditional methods reconstruct the waveform with relatively low accuracy, and some deep learning models also have difficulty extracting features, so these methods have room for further advancement. In this study, we proposed a novel model (CBi-SAN) to learn an end-to-end mapping from the radial artery pressure (RAP) waveform to the CAP waveform; it consists of a convolutional neural network (CNN), a bidirectional long short-term memory network (BiLSTM), and a self-attention mechanism to improve the performance of CAP reconstruction. Invasively measured CAP and RAP waveform data from 62 patients before and after medication were used to develop and validate the performance of the CBi-SAN model for reconstructing the CAP waveform. We compared it with traditional methods and deep learning models in terms of mean absolute error (MAE), root mean square error (RMSE), and Spearman correlation coefficient (SCC). The results indicated that the CBi-SAN model performed well on CAP waveform reconstruction (MAE: 2.23 ± 0.11 mmHg, RMSE: 2.21 ± 0.07 mmHg); the best reconstruction was obtained for the central artery systolic pressure (CASP) and the central artery diastolic pressure (CADP) (RMSE for CASP: 2.94 ± 0.48 mmHg, RMSE for CADP: 1.96 ± 0.06 mmHg). These results imply that CAP reconstruction based on the CBi-SAN model is superior to existing methods and may be effectively applied in clinical practice in the future.
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Wangwang Song
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Chang Liu
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Bo Peng
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Bin Jiang
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Zhi Liu
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
211
Zhang H, Zhang H, Zhang Y, Zhou B, Wu L, Lei Y, Huang B. Deep Learning Radiomics for the Assessment of Telomerase Reverse Transcriptase Promoter Mutation Status in Patients With Glioblastoma Using Multiparametric MRI. J Magn Reson Imaging 2023; 58:1441-1451. [PMID: 36896953] [DOI: 10.1002/jmri.28671]
Abstract
BACKGROUND Studies have shown that magnetic resonance imaging (MRI)-based deep learning radiomics (DLR) has the potential to assess glioma grade; however, its role in predicting telomerase reverse transcriptase (TERT) promoter mutation status in patients with glioblastoma (GBM) remains unclear. PURPOSE To evaluate the value of deep learning (DL) in multiparametric MRI-based radiomics for identifying TERT promoter mutations in patients with GBM preoperatively. STUDY TYPE Retrospective. POPULATION A total of 274 patients with isocitrate dehydrogenase-wildtype GBM were included in the study. The training and external validation cohorts included 156 (54.3 ± 12.7 years; 96 males) and 118 (54.2 ± 13.4 years; 73 males) patients, respectively. FIELD STRENGTH/SEQUENCE Axial contrast-enhanced T1-weighted spin-echo inversion recovery (T1CE), T1-weighted spin-echo inversion recovery (T1WI), and T2-weighted spin-echo inversion recovery (T2WI) sequences on 1.5-T and 3.0-T scanners were used in this study. ASSESSMENT Overall tumor regions (the tumor core and edema) were segmented, and radiomics and DL features were extracted from the preprocessed multiparametric preoperative brain MRI images (T1WI, T1CE, and T2WI). Models based on the DLR signature, clinical signature, and clinical DLR (CDLR) nomogram were developed and validated to identify TERT promoter mutation status. STATISTICAL TESTS The Mann-Whitney U test, Pearson test, least absolute shrinkage and selection operator, and logistic regression analysis were applied for feature selection and the construction of radiomics and DL signatures. Results were considered statistically significant at P < 0.05. RESULTS The DLR signature showed the best discriminative power for predicting TERT promoter mutations, yielding AUCs of 0.990 and 0.890 in the training and external validation cohorts, respectively. Furthermore, the DLR signature outperformed the CDLR nomogram (P = 0.670) and significantly outperformed the clinical models in the validation cohort. DATA CONCLUSION The multiparametric MRI-based DLR signature exhibited promising performance for the assessment of TERT promoter mutations in patients with GBM, which could provide information for individualized treatment. LEVEL OF EVIDENCE 3. TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Hongbo Zhang
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Hanwen Zhang
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Yuze Zhang
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Beibei Zhou
- Department of Radiology, The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, China
- Lei Wu
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yi Lei
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Biao Huang
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| |
Collapse
|
212
|
Elhadary M, Elshoeibi AM, Badr A, Elsayed B, Metwally O, Elshoeibi AM, Mattar M, Alfarsi K, AlShammari S, Alshurafa A, Yassin M. Revolutionizing chronic lymphocytic leukemia diagnosis: A deep dive into the diverse applications of machine learning. Blood Rev 2023; 62:101134. [PMID: 37758527 DOI: 10.1016/j.blre.2023.101134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Revised: 09/20/2023] [Accepted: 09/21/2023] [Indexed: 09/29/2023]
Abstract
Chronic lymphocytic leukemia (CLL) is a B cell neoplasm characterized by the accumulation of aberrant monoclonal B lymphocytes. CLL is the predominant type of leukemia in Western countries, accounting for 25% of cases. Although many patients remain asymptomatic, a subset may exhibit typical lymphoma symptoms, acquired immunodeficiency disorders, or autoimmune complications. Diagnosis involves blood tests showing increased lymphocytes and further examination using peripheral blood smear and flow cytometry to confirm the disease. With the significant advancements in machine learning (ML) and artificial intelligence (AI) in recent years, numerous models and algorithms have been proposed to support the diagnosis and classification of CLL. In this review, we discuss the benefits and drawbacks of recent applications of ML algorithms in the diagnosis and evaluation of patients diagnosed with CLL.
Collapse
Affiliation(s)
| | | | - Ahmed Badr
- College of Medicine, QU Health, Qatar University, Doha, Qatar
| | - Basel Elsayed
- College of Medicine, QU Health, Qatar University, Doha, Qatar
| | - Omar Metwally
- College of Medicine, QU Health, Qatar University, Doha, Qatar
| | | | - Mervat Mattar
- Internal Medicine and Clinical Hematology, Cairo University, Cairo, Egypt
| | - Khalil Alfarsi
- Department of Hematology, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat, Oman
| | - Salem AlShammari
- Department of Medicine, Faculty of Medicine, Kuwait University, Kuwait, Kuwait
| | - Awni Alshurafa
- Hematology Section, Medical Oncology, National Center for Cancer Care and Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
| | - Mohamed Yassin
- Hematology Section, Medical Oncology, National Center for Cancer Care and Research (NCCCR), Hamad Medical Corporation, Doha, Qatar.
| |
Collapse
|
213
|
Huang X, Li Y, Yuan S, Wu X, Xu P, Zhou A. Shear wave elastography-based deep learning model for prognosis of patients with acutely decompensated cirrhosis. JOURNAL OF CLINICAL ULTRASOUND : JCU 2023; 51:1568-1578. [PMID: 37883118 DOI: 10.1002/jcu.23577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2023] [Revised: 09/16/2023] [Accepted: 09/21/2023] [Indexed: 10/27/2023]
Abstract
PURPOSE This study aimed to develop and validate a deep learning model based on two-dimensional (2D) shear wave elastography (SWE) for predicting prognosis in patients with acutely decompensated cirrhosis. METHODS We prospectively enrolled 288 acutely decompensated cirrhosis patients with a minimum 1-year follow-up, divided into a training cohort (202 patients, 1010 2D SWE images) and a test cohort (86 patients, 430 2D SWE images). Using transfer learning with ResNet-50 to analyze 2D SWE images, an SWE-based deep learning signature (DLswe) was developed for 1-year mortality prediction. A combined nomogram was established by incorporating deep learning SWE information and laboratory data through a multivariate Cox regression analysis. The performance of the nomogram was evaluated with respect to predictive discrimination, calibration, and clinical usefulness in the training and test cohorts. RESULTS The C-index for DLswe was 0.748 (95% CI 0.666-0.829) and 0.744 (95% CI 0.623-0.864) in the training and test cohorts, respectively. The combined nomogram significantly improved the C-index, accuracy, sensitivity, and specificity of DLswe to 0.823 (95% CI 0.763-0.883), 86%, 75%, and 89% in the training cohort, and 0.808 (95% CI 0.707-0.909), 83%, 74%, and 85% in the test cohort (both p < 0.05). Calibration curves demonstrated good calibration of the combined nomogram. Decision curve analysis indicated that the nomogram was clinically valuable. CONCLUSIONS The 2D SWE-based deep learning model holds promise as a noninvasive tool to capture valuable prognostic information, thereby improving outcome prediction in patients with acutely decompensated cirrhosis.
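The C-index reported above is conventionally Harrell's concordance index: among pairs where one patient is observed to fail before the other's follow-up time, the fraction in which the model assigns the earlier failure a higher risk. A minimal sketch for right-censored data (illustrative only; survival libraries such as lifelines provide tested implementations):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.
    times: follow-up times; events: 1 if the event was observed, 0 if censored;
    risks: model risk scores (higher score = predicted earlier event)."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if subject i is seen to fail before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1  # ties in risk count half
    return (concordant + 0.5 * tied) / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which is the scale on which the DLswe values of 0.744-0.823 above are read.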
Collapse
Affiliation(s)
- Xingzhi Huang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Yaohui Li
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Songsong Yuan
- Department of Infectious Disease, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Xiaoping Wu
- Department of Infectious Disease, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Pan Xu
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China
| | - Aiyun Zhou
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China
| |
Collapse
|
214
|
Takahashi D, Fujimoto S, Nozaki YO, Kudo A, Kawaguchi YO, Takamura K, Hiki M, Sato E, Tomizawa N, Daida H, Minamino T. Fully automated coronary artery calcium quantification on electrocardiogram-gated non-contrast cardiac computed tomography using deep-learning with novel Heart-labelling method. EUROPEAN HEART JOURNAL OPEN 2023; 3:oead113. [PMID: 38035036 PMCID: PMC10683040 DOI: 10.1093/ehjopen/oead113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 09/14/2023] [Accepted: 10/26/2023] [Indexed: 12/02/2023]
Abstract
Aims To develop an artificial intelligence (AI) model which enables fully automated, accurate quantification of coronary artery calcium (CAC) using deep learning (DL) on electrocardiogram (ECG)-gated non-contrast cardiac computed tomography (gated CCT) images. Methods and results Retrospectively, 560 gated CCT images (including 60 synthetic images) acquired at our institution were used to train an AI model that automatically divides the heart region into five areas belonging to the left main (LM), left anterior descending (LAD), circumflex (LCX), and right coronary artery (RCA), plus a remaining area. Total and vessel-specific CAC scores (CACS) in each scan were manually evaluated. The AI model was trained with the novel Heart-labelling method via DL according to the manually derived results. Then, another 409 gated CCT images obtained at our institution were used for model validation. The performance of the present AI model was tested on an external cohort of 400 gated CCT images from the Stanford Center for Artificial Intelligence of Medical Imaging by comparison with the ground truth. The overall accuracy of the AI model for total CACS classification was excellent, with Cohen's kappa of k = 0.89 and 0.95 (validation and test, respectively), surpassing the previous report of k = 0.89. Bland-Altman analysis showed little difference between AI-derived and ground-truth total and vessel-specific CACS in the test cohort (mean differences [95% confidence intervals] were 1.5 [-42.6, 45.6], -1.5 [-100.5, 97.5], 6.6 [-60.2, 73.5], 0.96 [-59.2, 61.1], and 7.6 [-134.1, 149.2] for LM, LAD, LCX, RCA, and total CACS, respectively). Conclusion The present Heart-labelling method provides a further improvement in fully automated, total and vessel-specific CAC quantification on gated CCT.
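The abstract does not spell out how CACS is computed; on gated CT the conventional method is Agatston's: candidate lesions are voxels above 130 HU, and each lesion contributes its area times a density weight determined by its peak attenuation. The sketch below shows that standard convention as background, not the paper's exact implementation:

```python
def density_weight(peak_hu):
    """Agatston density factor from a lesion's peak attenuation in HU:
    130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4, below threshold -> 0."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: list of (area_mm2, peak_hu) tuples, one per calcified
    lesion per slice. The total score is the sum of area x weight."""
    return sum(area * density_weight(peak) for area, peak in lesions)
```

Vessel-specific CACS, as produced by the Heart-labelling model above, would simply restrict the lesion list to lesions falling in one labelled territory (LM, LAD, LCX, or RCA).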
Collapse
Affiliation(s)
- Daigo Takahashi
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Shinichiro Fujimoto
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Yui O Nozaki
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Ayako Kudo
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Yuko O Kawaguchi
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Kazuhisa Takamura
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Makoto Hiki
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Eisuke Sato
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Nobuo Tomizawa
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Hiroyuki Daida
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| | - Tohru Minamino
- Department of Cardiovascular Biology and Medicine, Juntendo University Graduate School of Medicine, 2-1-1 Hongo Bunkyo-ku, Tokyo 113-8421, Japan
| |
Collapse
|
215
|
Mohit K, Shukla A, Gupta R, Singh PK, Agarwal K, Kumar B. Contrastive Learning Embedded Siamese Neural Network for the Assessment of Fatty Liver. TENCON 2023 - 2023 IEEE REGION 10 CONFERENCE (TENCON) 2023:1261-1265. [DOI: 10.1109/tencon58879.2023.10322413] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
Affiliation(s)
- Kumar Mohit
- MNNIT Allahabad, Department of Electronics and Communication Engineering, Prayagraj, India
| | - Ankit Shukla
- MNNIT Allahabad, Department of Electronics and Communication Engineering, Prayagraj, India
| | - Rajeev Gupta
- MNNIT Allahabad, Department of Electronics and Communication Engineering, Prayagraj, India
| | | | | | - Basant Kumar
- MNNIT Allahabad, Department of Electronics and Communication Engineering, Prayagraj, India
| |
Collapse
|
216
|
Kaneda Y. In the era of prominent AI, what role will physicians be expected to play? QJM 2023; 116:881. [PMID: 37216897 DOI: 10.1093/qjmed/hcad099] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/14/2023] [Indexed: 05/24/2023] Open
Affiliation(s)
- Yudai Kaneda
- School of Medicine, Hokkaido University, Kita-ku, Kita 15, Nishi 7, Sapporo, Hokkaido 0608638, Japan
| |
Collapse
|
217
|
Livieris IE, Pintelas E, Kiriakidou N, Pintelas P. Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM. J Imaging 2023; 9:224. [PMID: 37888331 PMCID: PMC10606999 DOI: 10.3390/jimaging9100224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 10/03/2023] [Accepted: 10/12/2023] [Indexed: 10/28/2023] Open
Abstract
With the proliferation of image-based applications in various domains, the need for accurate and interpretable image similarity measures has become increasingly critical. Existing image similarity models often lack transparency, making it challenging to understand why two images are considered similar. In this paper, we propose the concept of explainable image similarity, where the goal is to develop an approach capable of providing similarity scores along with visual factual and counterfactual explanations. Along this line, we present a new framework, which integrates Siamese Networks and Grad-CAM for providing explainable image similarity, and discuss the potential benefits and challenges of adopting this approach. In addition, we provide a comprehensive discussion of the factual and counterfactual explanations provided by the proposed framework for assisting decision making. The proposed approach has the potential to enhance the interpretability, trustworthiness, and user acceptance of image-based systems in real-world image similarity applications.
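The core Siamese mechanic referenced above, one shared encoder applied to both images with the resulting embeddings compared by a similarity score, can be sketched as follows (the toy `encode` in the test is a placeholder, not the paper's network):

```python
import math

def cosine_similarity(u, v):
    """Similarity score in [-1, 1] between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def siamese_score(encode, img_a, img_b):
    """A Siamese network applies the SAME encoder (shared weights) to both
    inputs, then compares the two embeddings with a similarity measure."""
    return cosine_similarity(encode(img_a), encode(img_b))
```

Grad-CAM is then applied to each branch to highlight which regions drove the embeddings, giving the visual explanation of why the score came out high or low.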
Collapse
Affiliation(s)
- Ioannis E. Livieris
- Department of Statistics & Insurance, University of Piraeus, GR 185-34 Piraeus, Greece
| | - Emmanuel Pintelas
- Department of Mathematics, University of Patras, GR 265-00 Patras, Greece; (E.P.); (P.P.)
| | - Niki Kiriakidou
- Department of Informatics and Telematics, Harokopio University of Athens, GR 177-78 Athens, Greece;
| | - Panagiotis Pintelas
- Department of Mathematics, University of Patras, GR 265-00 Patras, Greece; (E.P.); (P.P.)
| |
Collapse
|
218
|
Linfeng W, Yong L, Jiayao L, Yunsheng W, Shipu X. Based on the multi-scale information sharing network of fine-grained attention for agricultural pest detection. PLoS One 2023; 18:e0286732. [PMID: 37796844 PMCID: PMC10553313 DOI: 10.1371/journal.pone.0286732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 05/22/2023] [Indexed: 10/07/2023] Open
Abstract
Accurately identifying pest species and controlling them effectively is of great significance for reducing the loss of agricultural products. The results of this project provide a theoretical basis for preventing and controlling the spread of pests and reducing agricultural losses, and have practical significance for improving the quality and output of agricultural products. At the same time, they offer farmers an effective set of prevention and control measures to help ensure the safety and health of crops. Because manual identification is slow and costly, an automatic pest identification system is needed. Traditional image-based insect classifiers rely mainly on machine vision technology, but their high complexity leads to low classification efficiency, making it difficult to meet the needs of applications. It is therefore necessary to develop a new automatic insect recognition system to improve the accuracy of insect classification. Insects are numerous in species and form, field environments are complex, and morphological similarity between species is high, all of which make insect classification difficult. In recent years, with the rapid development of deep learning, classifying pests with artificial neural networks has become an important way to build fast and accurate classification models. In this work, we propose a novel convolutional neural network-based model (MSSN), which includes an attention mechanism, a feature pyramid, and a fine-grained model. The model has good scalability, better captures the semantic information in images, and achieves more accurate classification.
We evaluated our approach on common datasets: a large-scale pest dataset and the PlantVillage benchmark dataset, measuring model performance with a variety of evaluation indicators, namely macro mean precision (MPre), macro mean recall (MRec), macro mean F1-score (MF1), accuracy (Acc), and geometric mean (GM). Experimental results show that the proposed algorithm performs better and generalizes better than existing algorithms; for example, the maximum accuracy we obtained was 86.35%, exceeding the corresponding state of the art. An ablation study showed that the complete MSSN (scale 1+2+3) was best across the performance indexes, demonstrating the feasibility of the proposed method.
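The macro-averaged indicators named above (MPre, MRec, MF1) are plain unweighted averages of per-class precision, recall, and F1, so every pest class counts equally regardless of how many images it has. A sketch from per-class confusion counts:

```python
def macro_metrics(tp, fp, fn):
    """tp/fp/fn: dicts of true-positive, false-positive, and false-negative
    counts keyed by class label. Returns (MPre, MRec, MF1), macro-averaging
    the per-class scores with equal class weight."""
    precisions, recalls, f1s = [], [], []
    for c in tp:
        p = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        r = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    n = len(tp)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Accuracy (Acc), by contrast, weights every sample equally, which is why the two can diverge sharply on the long-tailed class distributions typical of pest datasets.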
Collapse
Affiliation(s)
- Wang Linfeng
- Institute of Agricultural Information Science and Technology, Shanghai Academy of Agricultural Sciences, Shanghai, China
| | - Liu Yong
- Institute of Agricultural Information Science and Technology, Shanghai Academy of Agricultural Sciences, Shanghai, China
| | - Liu Jiayao
- Institute of Agricultural Information Science and Technology, Shanghai Academy of Agricultural Sciences, Shanghai, China
| | - Wang Yunsheng
- Institute of Agricultural Information Science and Technology, Shanghai Academy of Agricultural Sciences, Shanghai, China
| | - Xu Shipu
- Institute of Agricultural Information Science and Technology, Shanghai Academy of Agricultural Sciences, Shanghai, China
| |
Collapse
|
219
|
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. [PMID: 35680755 DOI: 10.1007/s12350-022-03007-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 05/02/2022] [Indexed: 10/18/2022]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
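As background for why the mu-maps discussed above matter: photon survival along a ray through tissue follows the Beer-Lambert law, I/I0 = exp(-integral of mu dl), so AC amounts to undoing this per-ray loss. A discretized sketch (uniform sampling step assumed):

```python
import math

def attenuation_factor(mu_along_ray, step_mm):
    """Fraction of photons surviving a ray through tissue (Beer-Lambert):
    I/I0 = exp(-sum(mu_i * dl)), with linear attenuation coefficients
    mu_along_ray sampled every step_mm millimetres (mu in 1/mm)."""
    return math.exp(-sum(mu_along_ray) * step_mm)
```

The indirect strategies in the review learn to synthesize the mu-map (the `mu_along_ray` samples) from emission or MR images before reconstruction applies this correction; the direct strategies learn the corrected image end-to-end and never materialize the mu-map at all.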
Collapse
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA.
| |
Collapse
|
220
|
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058 DOI: 10.1055/a-2157-6670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/08/2023]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Collapse
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
| |
Collapse
|
221
|
Huang X, Chen X, Zhong X, Tian T. The CNN model aided the study of the clinical value hidden in the implant images. J Appl Clin Med Phys 2023; 24:e14141. [PMID: 37656066 PMCID: PMC10562019 DOI: 10.1002/acm2.14141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 08/14/2023] [Accepted: 08/16/2023] [Indexed: 09/02/2023] Open
Abstract
PURPOSE This article aims to construct a new method to evaluate radiographic image identification results based on artificial intelligence, which can complement the limited vision of researchers when studying the effect of various factors on clinical implantation outcomes. METHODS We constructed a convolutional neural network (CNN) model using clinical implant radiographic images. Moreover, we used gradient-weighted class activation mapping (Grad-CAM) to obtain thermal maps presenting identification differences before performing statistical analyses. Subsequently, to verify whether the differences presented by the Grad-CAM algorithm would be of value to clinical practice, we measured the bone thickness around the identified sites. Finally, we analyzed the influence of the implant type on implantation according to the measurement results. RESULTS (1) The thermal maps showed that the sites with significant differences between Straumann BL and Bicon implants as identified by the CNN model were mainly the thread and neck area. (2) The heights of the mesial, distal, buccal, and lingual bone of the Bicon implant post-op were greater than those of the Straumann BL (P < 0.05). (3) Between the first and second stages of surgery, the amount of bone thickness variation at the buccal and lingual sides of the Bicon implant platform was greater than that of the Straumann BL implant (P < 0.05). CONCLUSION We found that the identified neck area of the Bicon implant was placed deeper than that of the Straumann BL implant, and there was more bone resorption on the buccal and lingual sides of the Bicon implant platform between the first and second stages of surgery. In summary, this study shows that a CNN classification model can identify differences that complement our limited vision.
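The Grad-CAM step above weights each activation channel of the last convolutional layer by the global-average-pooled gradient of the class score, sums the weighted channels, and applies a ReLU to produce the thermal map. A framework-free sketch of that computation on toy arrays (real use would take the tensors from a trained CNN):

```python
def grad_cam(activations, gradients):
    """activations, gradients: [channels][h][w] nested lists taken from the
    last conv layer (activations) and the class-score gradient w.r.t. them.
    Channel weights are the global-average-pooled gradients; the class
    activation map is the ReLU of the weighted sum over channels."""
    k = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_c = mean of the gradient over the spatial positions of channel c
    weights = [sum(sum(row) for row in gradients[c]) / (h * w) for c in range(k)]
    # L(i, j) = ReLU( sum_c alpha_c * A_c(i, j) )
    cam = [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(k)))
            for j in range(w)] for i in range(h)]
    return cam
```

Upsampled to the radiograph's resolution, this map is exactly the kind of heatmap the study reads off to locate the thread and neck areas.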
Collapse
Affiliation(s)
- Xinxu Huang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Xingyu Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Xinnan Zhong
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Taoran Tian
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| |
Collapse
|
222
|
Lu X, Liu X, Xiao Z, Zhang S, Huang J, Yang C, Liu S. Self-supervised dual-head attentional bootstrap learning network for prostate cancer screening in transrectal ultrasound images. Comput Biol Med 2023; 165:107337. [PMID: 37672927 DOI: 10.1016/j.compbiomed.2023.107337] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 07/13/2023] [Accepted: 08/07/2023] [Indexed: 09/08/2023]
Abstract
Current convolutional neural network-based ultrasound classification models for prostate cancer often rely on extensive manual labeling. Although self-supervised learning (SSL) has shown promise in addressing this problem, data from medical scenarios contain intra-class similarity conflicts, so loss calculations that directly include positive and negative sample pairs can mislead training. SSL methods also tend to focus on global consistency at the image level and do not consider the internal informative relationships of the feature map. To improve the efficiency of prostate cancer diagnosis by using SSL to learn key diagnostic information in ultrasound images, we propose a self-supervised dual-head attentional bootstrap learning network (SDABL), comprising an Online-Net and a Target-Net. A Self-Position Attention Module (SPAM) and an adaptive maximum channel attention module (CAAM) are inserted in both paths simultaneously. With a small number of parameters, they capture position and inter-channel attention of the original feature map and address the information optimization problem of feature maps in SSL. In the loss calculation, we discard the construction of negative sample pairs and instead guide the network to learn the consistency of the location space and channel space by continuously drawing closer the embedding representations of positive samples. We conducted extensive experiments on a prostate transrectal ultrasound (TRUS) dataset; they show that our SDABL pre-training method has significant advantages over both mainstream contrastive learning methods and other attention-based methods. Specifically, the SDABL pre-trained backbone achieves 80.46% accuracy on our TRUS dataset after fine-tuning.
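Negative-free bootstrap objectives of the kind described above (an online network's prediction pulled toward a target network's embedding of the same sample, as in BYOL-style methods) typically reduce to minimizing 2 - 2*cos(p, z) on L2-normalized vectors. A sketch under that assumption, not the authors' exact loss:

```python
import math

def l2_normalize(v):
    """Project a vector onto the unit sphere."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def positive_pair_loss(online_pred, target_emb):
    """Negative-free SSL loss: after L2 normalization, minimizing
    2 - 2*cos(p, z) pulls the online prediction p toward the target
    embedding z of the SAME (positive) sample. No negative pairs are
    constructed, which sidesteps intra-class similarity conflicts."""
    p = l2_normalize(online_pred)
    z = l2_normalize(target_emb)
    cos = sum(a * b for a, b in zip(p, z))
    return 2.0 - 2.0 * cos
```

In such schemes the target network receives no gradient (stop-gradient) and is updated as a moving average of the online network, which is what makes training stable without negatives.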
Collapse
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China; Guangdong Provincial Key Laboratory of Intellectual Property & Big Data, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
| | - Xiangjun Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Zhiwei Xiao
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China.
| | - Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China.
| | - Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China.
| |
Collapse
|
223
|
Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107627. [PMID: 37320942 DOI: 10.1016/j.cmpb.2023.107627] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 05/03/2023] [Accepted: 05/27/2023] [Indexed: 06/17/2023]
Abstract
Hypertensive Retinopathy (HR) is a retinal disease caused by elevated blood pressure for a prolonged period. There are no obvious signs in the early stages of high blood pressure, but it affects various body parts over time, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, strokes, kidney disease, and cardiovascular risks. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination prior to the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR with improved diagnostic accuracy, which is time-efficient and demands fewer resources. Recently, numerous studies have been reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of Artery-Vein (A/V) classification, Arteriovenous ratio (AVR) computation, HR detection (Binary classification), and HR severity grading. The review is conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods used for A/V classification, AVR computation, HR detection, and severity grading, and performance evaluation metrics. The reviewed articles are summarized with classifiers details, adoption of different kinds of methodologies, performance comparisons, datasets details, their pros and cons, and computational platform. For each task, a summary and critical in-depth analysis are provided, as well as common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome challenges associated with data set availability, HR detection, and severity grading.
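For the AVR computation task reviewed above, the conventional approach is AVR = CRAE/CRVE, with the central retinal artery and vein equivalents combined from the six largest vessel calibers using the revised Knudtson formulas. The sketch below assumes those formulas (constants 0.88 for arterioles and 0.95 for venules, repeatedly pairing the widest with the narrowest vessel); treat the details as an assumption rather than a statement of what any reviewed method implements:

```python
import math

def knudtson_combine(widths, k):
    """Iteratively pair the widest remaining caliber with the narrowest and
    combine each pair as k * sqrt(w1^2 + w2^2) until one value remains."""
    w = sorted(widths)
    while len(w) > 1:
        combined = [k * math.sqrt(w[i] ** 2 + w[-(i + 1)] ** 2)
                    for i in range(len(w) // 2)]
        if len(w) % 2:
            combined.append(w[len(w) // 2])  # odd count: middle vessel carries over
        w = sorted(combined)
    return w[0]

def avr(artery_widths, vein_widths):
    """Arteriovenous ratio = CRAE / CRVE (assumed Knudtson constants)."""
    crae = knudtson_combine(artery_widths, 0.88)
    crve = knudtson_combine(vein_widths, 0.95)
    return crae / crve
```

A low AVR (generalized arteriolar narrowing) is one of the clinical features CAD systems grade, which is why A/V classification accuracy feeds directly into AVR reliability.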
Affiliation(s)
- Supriya Suman
- Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Anil Kumar Tiwari
- Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Kuldeep Singh
- Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
224
Siddiqui EA, Chaurasia V, Shandilya M. Classification of lung cancer computed tomography images using a 3-dimensional deep convolutional neural network with multi-layer filter. J Cancer Res Clin Oncol 2023; 149:11279-11294. [PMID: 37368121 DOI: 10.1007/s00432-023-04992-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 06/15/2023] [Indexed: 06/28/2023]
Abstract
Lung cancer creates pulmonary nodules in the patient's lung, which may be diagnosed early using computer-aided diagnostics. This paper presents a novel automated pulmonary nodule diagnosis technique using three-dimensional deep convolutional neural networks with a multi-layered filter. Volumetric computed tomography images are employed for the proposed automated diagnosis of lung nodules. The proposed approach generates three-dimensional feature layers, which retain the temporal links between adjacent slices of computed tomography images. The use of several activation functions at different levels of the proposed network results in improved feature extraction and efficient classification. The proposed approach classifies volumetric lung computed tomography images into malignant and benign categories. The technique's performance is evaluated using three commonly used datasets in the domain: LUNA16, LIDC-IDRI, and TCIA. The proposed method outperforms the state of the art in terms of accuracy, sensitivity, specificity, F1 score, false-positive rate, false-negative rate, and error rate.
Affiliation(s)
- Madhu Shandilya
- Maulana Azad National Institute of Technology, Bhopal, 462003, India
225
Liu W, Wang W, Zhang H, Guo M, Xu Y, Liu X. Development and Validation of Multi-Omics Thymoma Risk Classification Model Based on Transfer Learning. J Digit Imaging 2023; 36:2015-2024. [PMID: 37268842 PMCID: PMC10501978 DOI: 10.1007/s10278-023-00855-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 05/17/2023] [Accepted: 05/19/2023] [Indexed: 06/04/2023] Open
Abstract
This paper aims to develop a prediction model that integrates clinical, radiomics, and deep features using transfer learning to stratify thymoma into high- and low-risk groups. The study enrolled 150 patients with thymoma (76 low-risk and 74 high-risk) who underwent surgical resection with pathological confirmation at Shengjing Hospital of China Medical University from January 2018 to December 2020. The training cohort consisted of 120 patients (80%) and the test cohort of 30 patients (20%). A total of 2590 radiomics and 192 deep features were extracted from non-enhanced, arterial, and venous phase CT images, and ANOVA, the Pearson correlation coefficient, PCA, and LASSO were used to select the most significant features. A fusion model integrating clinical, radiomics, and deep features was developed with SVM classifiers to predict the risk level of thymoma; accuracy, sensitivity, specificity, ROC curves, and AUC were used to evaluate the classification model. In both the training and test cohorts, the fusion model demonstrated better performance in stratifying high- and low-risk thymoma, with AUCs of 0.99 and 0.95 and accuracies of 0.93 and 0.83, respectively. This compares with the clinical model (AUCs of 0.70 and 0.51, accuracies of 0.68 and 0.47), the radiomics model (AUCs of 0.97 and 0.82, accuracies of 0.93 and 0.80), and the deep model (AUCs of 0.94 and 0.85, accuracies of 0.88 and 0.80). The fusion model integrating clinical, radiomics, and deep features based on transfer learning was efficient for noninvasively stratifying high- and low-risk thymoma and could help determine the surgical strategy for thymoma.
Affiliation(s)
- Wei Liu
- School of Health Management, China Medical University, Shenyang, China
- Wei Wang
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Hanyi Zhang
- Department of Radiology, Liaoning Cancer Hospital and Institute, Shenyang, China
- Miaoran Guo
- Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, China
- Yingxin Xu
- School of Health Management, China Medical University, Shenyang, China
- Xiaoqi Liu
- School of Health Management, China Medical University, Shenyang, China
226
Ren W, Zhu Y, Wang Q, Song Y, Fan Z, Bai Y, Lin D. Deep learning prediction model for central lymph node metastasis in papillary thyroid microcarcinoma based on cytology. Cancer Sci 2023; 114:4114-4124. [PMID: 37574759 PMCID: PMC10551586 DOI: 10.1111/cas.15930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 07/11/2023] [Accepted: 08/01/2023] [Indexed: 08/15/2023] Open
Abstract
Controversy exists regarding whether patients with low-risk papillary thyroid microcarcinoma (PTMC) should undergo surgery or active surveillance; the inaccuracy of the preoperative clinical lymph node status assessment is one of the primary factors contributing to the controversy. It is imperative to accurately predict the lymph node status of PTMC before surgery. We selected 208 preoperative fine-needle aspiration (FNA) liquid-based preparations of PTMC as our research objects; all of these instances underwent lymph node dissection and, aside from lymph node status, were consistent with low-risk PTMC. We separated them into two groups according to whether the postoperative pathology showed central lymph node metastases. The deep learning model was expected to predict, based on the preoperative thyroid FNA liquid-based preparation, whether PTMC was accompanied by central lymph node metastases. Our deep learning model attained a sensitivity, specificity, positive prediction value (PPV), negative prediction value (NPV), and accuracy of 78.9% (15/19), 73.9% (17/23), 71.4% (15/21), 81.0% (17/21), and 76.2% (32/42), respectively. The area under the receiver operating characteristic curve (value was 0.8503. The predictive performance of the deep learning model was superior to that of the traditional clinical evaluation, and further analysis revealed the cell morphologies that played key roles in model prediction. Our study suggests that the deep learning model based on preoperative thyroid FNA liquid-based preparation is a reliable strategy for predicting central lymph node metastases in thyroid micropapillary carcinoma, and its performance surpasses that of traditional clinical examination.
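The counts reported above correspond to a 2x2 confusion matrix with TP = 15, FN = 4, TN = 17, FP = 6, and every quoted metric follows from the standard definitions. A minimal sketch (not the authors' code) that reproduces them:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Counts from the PTMC study above: TP=15, FP=6, TN=17, FN=4
m = diagnostic_metrics(15, 6, 17, 4)
```

Evaluating this gives 15/19, 17/23, 15/21, 17/21, and 32/42, matching the 78.9%, 73.9%, 71.4%, 81.0%, and 76.2% quoted in the abstract.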
Affiliation(s)
- Wenhao Ren
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Yanli Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Qian Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Yuntao Song
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Head and Neck Surgery, Peking University Cancer Hospital and Institute, Beijing, China
- Zhihui Fan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Ultrasound, Peking University Cancer Hospital and Institute, Beijing, China
- Yanhua Bai
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Dongmei Lin
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
227
Pan C, Lian L, Chen J, Huang R. FemurTumorNet: Bone tumor classification in the proximal femur using DenseNet model based on radiographs. J Bone Oncol 2023; 42:100504. [PMID: 37766930 PMCID: PMC10520341 DOI: 10.1016/j.jbo.2023.100504] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Revised: 08/31/2023] [Accepted: 09/03/2023] [Indexed: 09/29/2023] Open
Abstract
Background & purpose For the best possible outcomes from therapy, bone tumors in the proximal femur must be accurately classified. This work develops an artificial intelligence (AI) model based on plain radiographs to classify bone tumors in the proximal femur. Materials and methods Standard anteroposterior hip radiographs from a tertiary referral center were employed. A dataset of 538 images of the femur, including malignant, benign, and tumor-free cases, was used for training the AI model; 214 of these images show bone tumors. Pre-processing techniques were applied, and a DenseNet model was utilized for classification. The performance of the DenseNet model was compared with that of human doctors using cross-validation, further enhanced by incorporating Grad-CAM to visually indicate tumor locations. Results For the three-label classification task, the proposed method achieves an excellent area under the receiver operating characteristic curve (AUROC) of 0.953. Its diagnostic accuracy (0.853) was considerably higher than that of the human experts in manual classification (0.794). The AI model outperformed the mean values of the clinicians in terms of sensitivity, specificity, accuracy, and F1 scores. Conclusion The developed DenseNet model demonstrated remarkable accuracy in classifying bone tumors in the proximal femur using plain radiographs. This technology has the potential to reduce misdiagnosis, particularly among non-specialists in musculoskeletal oncology. The utilization of advanced deep learning models provides a promising approach for improved classification and enhanced clinical decision-making in bone tumor detection.
Affiliation(s)
- Canyu Pan
- Department of Radiology, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, Fujian Province, China
- Luoyu Lian
- Department of Thoracic Surgery, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, Fujian Province, China
- Jieyun Chen
- Department of Radiology, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, Fujian Province, China
- Risheng Huang
- Department of Radiology, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, Fujian Province, China
228
Hu D, Li X, Lin C, Wu Y, Jiang H. Deep Learning to Predict the Cell Proliferation and Prognosis of Non-Small Cell Lung Cancer Based on FDG-PET/CT Images. Diagnostics (Basel) 2023; 13:3107. [PMID: 37835850 PMCID: PMC10573026 DOI: 10.3390/diagnostics13193107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 09/15/2023] [Accepted: 09/29/2023] [Indexed: 10/15/2023] Open
Abstract
(1) Background: Cell proliferation (Ki-67) has important clinical value in the treatment and prognosis of non-small cell lung cancer (NSCLC). However, current detection methods for Ki-67 are invasive and can lead to incorrect results. This study aimed to explore a deep learning classification model for the prediction of Ki-67 and the prognosis of NSCLC based on FDG-PET/CT images. (2) Methods: The FDG-PET/CT scan results of 159 patients with pathologically confirmed NSCLC were analyzed retrospectively, and prediction models for the Ki-67 expression level based on PET images, CT images, and combined PET/CT images were constructed using DenseNet201. Based on a Ki-67 high expression score (HES) obtained from the prediction model, the survival of patients with NSCLC was analyzed using Kaplan-Meier and univariate Cox regression. (3) Results: The statistical analysis showed that Ki-67 expression was significantly correlated with clinical features of NSCLC, including age, gender, differentiation state, and histopathological type. After a comparison of the three models (i.e., the PET model, the CT model, and the combined FDG-PET/CT model), the combined model was found to have the greatest advantage in Ki-67 prediction in terms of AUC (0.891), accuracy (0.822), precision (0.776), and specificity (0.902). Meanwhile, our results indicated that HES was a risk factor for prognosis and could be used for the survival prediction of NSCLC patients. (4) Conclusions: The deep-learning-based FDG-PET/CT radiomics classifier provided a novel non-invasive strategy with which to evaluate the malignancy and prognosis of NSCLC.
Affiliation(s)
- Dehua Hu
- Department of Biomedical Informatics, School of Life Sciences, Central South University, Changsha 410013, China
- Xiang Li
- Department of Biomedical Informatics, School of Life Sciences, Central South University, Changsha 410013, China
- Chao Lin
- Department of Biomedical Informatics, School of Life Sciences, Central South University, Changsha 410013, China
- Yonggang Wu
- Department of Nuclear Medicine & PET Imaging Center, The Second Xiangya Hospital of Central South University, Changsha 410011, China
- Hao Jiang
- Department of Biomedical Informatics, School of Life Sciences, Central South University, Changsha 410013, China
229
Zhang J, Tan X, Chen W, Du G, Fu Q, Zhang H, Jiang H. EFF_D_SVM: a robust multi-type brain tumor classification system. Front Neurosci 2023; 17:1269100. [PMID: 37841686 PMCID: PMC10570803 DOI: 10.3389/fnins.2023.1269100] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Accepted: 08/29/2023] [Indexed: 10/17/2023] Open
Abstract
Brain tumors are among the most threatening diseases to human health. Accurate identification of the type of brain tumor is essential for patients and doctors. An automated brain tumor diagnosis system based on Magnetic Resonance Imaging (MRI) can help doctors identify the type of tumor and reduce their workload, so it is vital to improve the performance of such systems. Due to the challenge of collecting sufficient data on brain tumors, utilizing pre-trained Convolutional Neural Network (CNN) models for brain tumor classification is a feasible approach. The study proposes a novel brain tumor classification system, called EFF_D_SVM, which is developed on the basis of the pre-trained EfficientNetB0 model. Firstly, a new feature extraction module, EFF_D, was proposed, in which the classification layer of EfficientNetB0 was replaced with two dropout layers and two dense layers. Secondly, the EFF_D model was fine-tuned using Softmax, and features of brain tumor images were then extracted using the fine-tuned EFF_D. Finally, the features were classified using a Support Vector Machine (SVM). To verify the effectiveness of the proposed brain tumor classification system, a series of comparative experiments was carried out. Moreover, to understand the extracted features of the brain tumor images, Grad-CAM was used to visualize the proposed model. Furthermore, cross-validation was conducted to verify the robustness of the proposed model. Evaluation metrics including accuracy, F1-score, recall, and precision were used to evaluate the proposed system's performance. The experimental results indicate that the proposed model is superior to other state-of-the-art models.
Affiliation(s)
- Jincan Zhang
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Xinghua Tan
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Wenna Chen
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Ganqin Du
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Qizhi Fu
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongri Zhang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongwei Jiang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
230
Zhang G, Luo Y, Dai X, Dai Z. Benchmarking deep learning methods for predicting CRISPR/Cas9 sgRNA on- and off-target activities. Brief Bioinform 2023; 24:bbad333. [PMID: 37775147 DOI: 10.1093/bib/bbad333] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Revised: 08/31/2023] [Accepted: 09/04/2023] [Indexed: 10/01/2023] Open
Abstract
In silico design of single guide RNA (sgRNA) plays a critical role in the clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system. Continuous efforts are aimed at improving sgRNA design for efficient on-target activity and reduced off-target mutations. In the last 5 years, an increasing number of deep learning-based methods have achieved breakthrough performance in predicting sgRNA on- and off-target activities. Nevertheless, it is worthwhile to systematically evaluate these methods for their predictive abilities. In this review, we conducted a systematic survey of the progress in the prediction of on- and off-target editing. We investigated the performance of 10 mainstream deep learning-based on-target predictors using nine public datasets with different sample sizes. We found that in most scenarios, these methods showed superior predictive power on large- and medium-scale datasets compared with small-scale datasets. In addition, we performed unbiased experiments to provide an in-depth comparison of eight representative approaches for off-target prediction on 12 publicly available datasets with various imbalance ratios of positive/negative samples. Most methods showed excellent performance on balanced datasets but have much room for improvement on moderately and severely imbalanced datasets. This study provides comprehensive perspectives on CRISPR/Cas9 sgRNA on- and off-target activity prediction and improvement for method development.
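Most deep sgRNA predictors of the kind benchmarked above take the guide (plus PAM) sequence as a one-hot matrix, one channel per nucleotide. A minimal sketch of that common encoding step (illustrative only, not taken from any benchmarked tool):

```python
def one_hot_sgrna(seq):
    """One-hot encode a guide (+PAM) sequence: A/C/G/T -> 4 channels per position."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    mat = [[0] * 4 for _ in seq]  # positions x channels
    for i, base in enumerate(seq.upper()):
        mat[i][table[base]] = 1
    return mat
```

For a 20-nt spacer plus the 3-nt NGG PAM this yields a 23x4 matrix, which convolutional or recurrent on-target models then consume.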
Affiliation(s)
- Guishan Zhang
- College of Engineering, Shantou University, Shantou 515063, China
- Ye Luo
- College of Engineering, Shantou University, Shantou 515063, China
- Xianhua Dai
- School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen 518107, China
- Southern Marine Science and Engineering Guangdong Laboratory, Zhuhai 519000, China
- Zhiming Dai
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Guangdong Province Key Laboratory of Big Data Analysis and Processing, Sun Yat-sen University, Guangzhou 510006, China
231
Feng HW, Chen JJ, Zhang ZC, Zhang SC, Yang WH. Bibliometric analysis of artificial intelligence and optical coherence tomography images: research hotspots and frontiers. Int J Ophthalmol 2023; 16:1431-1440. [PMID: 37724282 PMCID: PMC10475613 DOI: 10.18240/ijo.2023.09.09] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 07/05/2023] [Indexed: 09/20/2023] Open
Abstract
AIM To explore the latest applications of artificial intelligence (AI) in optical coherence tomography (OCT) images, to analyze the current research status of AI in OCT, and to discuss future research trends. METHODS On June 1, 2023, a bibliometric analysis of the Web of Science Core Collection was performed to explore the utilization of AI in OCT imagery. Key parameters such as papers, countries/regions, citations, databases, organizations, keywords, journal names, and research hotspots were extracted and then visualized using the VOSviewer and CiteSpace V bibliometric platforms. RESULTS Fifty-five nations reported studies on AI biotechnology and its application in analyzing OCT images. The United States was the country with the largest number of published papers. Furthermore, 197 institutions worldwide contributed published articles, among which the University of London had the most publications. The reference clusters from the study could be divided into four categories: thickness and eyes, diabetic retinopathy (DR), images and segmentation, and OCT classification. CONCLUSION The latest hot topics and future directions in this field are identified, and the dynamic evolution of AI-based OCT imaging is outlined. AI-based OCT imaging holds great potential for revolutionizing clinical care.
Affiliation(s)
- Hai-Wen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Jun-Jie Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Zhi-Chang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang 110122, Liaoning Province, China
- Shao-Chong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wei-Hua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
232
Wang H, Zhang J, Huang Y, Cai B. FBANet: Transfer Learning for Depression Recognition Using a Feature-Enhanced Bi-Level Attention Network. ENTROPY (BASEL, SWITZERLAND) 2023; 25:1350. [PMID: 37761649 PMCID: PMC10529103 DOI: 10.3390/e25091350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/30/2023] [Accepted: 09/14/2023] [Indexed: 09/29/2023]
Abstract
The House-Tree-Person (HTP) sketch test is a psychological analysis technique designed to assess the mental health status of test subjects. Nowadays, there are mature methods for the recognition of depression using the HTP sketch test. However, existing works primarily rely on manual analysis of drawing features, which has the drawbacks of strong subjectivity and low automation. Only a small number of works automatically recognize depression using machine learning and deep learning methods, but their complex data preprocessing pipelines and multi-stage computational processes indicate a relatively low level of automation. To overcome the above issues, we present a novel deep learning-based one-stage approach for depression recognition in HTP sketches, which has a simple data preprocessing pipeline and calculation process with a high accuracy rate. In terms of data, we use a hand-drawn HTP sketch dataset, which contains drawings of normal people and patients with depression. In the model aspect, we design a novel network called Feature-Enhanced Bi-Level Attention Network (FBANet), which contains feature enhancement and bi-level attention modules. Due to the limited size of the collected data, transfer learning is employed, where the model is pre-trained on a large-scale sketch dataset and fine-tuned on the HTP sketch dataset. On the HTP sketch dataset, utilizing cross-validation, FBANet achieves a maximum accuracy of 99.07% on the validation dataset, with an average accuracy of 97.71%, outperforming traditional classification models and previous works. In summary, the proposed FBANet, after pre-training, demonstrates superior performance on the HTP sketch dataset and is expected to be a method for the auxiliary diagnosis of depression.
Affiliation(s)
- Bo Cai
- Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
233
Yan S, Li J, Wang J, Liu G, Ai A, Liu R. A Novel Strategy for Extracting Richer Semantic Information Based on Fault Detection in Power Transmission Lines. ENTROPY (BASEL, SWITZERLAND) 2023; 25:1333. [PMID: 37761632 PMCID: PMC10529342 DOI: 10.3390/e25091333] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 09/07/2023] [Accepted: 09/12/2023] [Indexed: 09/29/2023]
Abstract
With the development of the smart grid, traditional defect detection methods for transmission lines have gradually shifted to combinations of robots or drones with deep learning technology to realize automatic defect detection, avoiding the risks and costs of manual inspection. Lightweight embedded devices such as drones and robots are small devices with limited computational resources, while deep learning mostly relies on deep neural networks with huge computational demands. Moreover, the semantic features of deep networks are richer, which is critical for accurately classifying morphologically similar defects, helping to identify differences and classify transmission line components. Therefore, we propose a method to obtain advanced semantic features even in shallow networks. Combined with transfer learning, we change the image features (e.g., position and edge connectivity) under self-supervised learning during pre-training. This allows the pre-trained model to learn potential semantic feature representations rather than relying on low-level features. The pre-trained model then directs a shallow network to extract rich semantic features for downstream tasks. In addition, we introduce a category semantic fusion module (CSFM) to enhance feature fusion by utilizing channel attention to capture global and local information lost during compression and extraction. This module helps to obtain more category semantic information. Our experiments on a self-created transmission line defect dataset show the superiority of modifying low-level image information during pre-training when adjusting the number of network layers and the embedding of the CSFM. The strategy demonstrates generalization on the publicly available PASCAL VOC dataset. Finally, compared with state-of-the-art methods on the synthetic fog insulator dataset (SFID), the strategy achieves comparable performance with much smaller network depths.
Affiliation(s)
- Shuxia Yan
- School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
- Junhuan Li
- School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
- Jiachen Wang
- College of Mechanical and Electronic Engineering, Northwest A&F University, Xianyang 712100, China
- Gaohua Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Anhai Ai
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Rui Liu
- School of Software, Tiangong University, Tianjin 300387, China
234
Liang C, Li X, Qin Y, Li M, Ma Y, Wang R, Xu X, Yu J, Lv S, Luo H. Effective automatic detection of anterior cruciate ligament injury using convolutional neural network with two attention mechanism modules. BMC Med Imaging 2023; 23:120. [PMID: 37697236 PMCID: PMC10494428 DOI: 10.1186/s12880-023-01091-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 08/30/2023] [Indexed: 09/13/2023] Open
Abstract
BACKGROUND To develop a fully automated CNN detection system based on magnetic resonance imaging (MRI) for ACL injury, and to explore the feasibility of CNNs for ACL injury detection on MRI images. METHODS The study included 313 patients aged 16-65 years; the raw data comprised 368 images with an injured ACL and 100 images with an intact ACL. After augmentation by flipping, rotation, scaling, and other methods, the final dataset contained 630 images: 355 with an injured ACL and 275 with an intact ACL. Using the proposed CNN model with two attention mechanism modules, the dataset was trained and tested with fivefold cross-validation. RESULTS The performance of the proposed CNN model was evaluated using accuracy, precision, sensitivity, specificity, and F1 score, with results of 0.8063, 0.7741, 0.9268, 0.6509, and 0.8436, respectively. The average accuracy in the fivefold cross-validation was 0.8064, and the average area under the curve (AUC) for detecting an injured ACL was 0.8886. CONCLUSION We propose an effective and automatic CNN model to detect ACL injury from MRI of human knees. This model can effectively help clinicians diagnose ACL injury, improving diagnostic efficiency and reducing misdiagnosis and missed diagnosis.
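The fivefold cross-validation protocol used above (and in several other studies in this listing) amounts to partitioning the image indices into five disjoint test folds and training on the remaining four each time. A minimal, generic index splitter (an illustration, not the authors' pipeline):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation.

    Indices are shuffled deterministically, then dealt into k disjoint folds;
    each fold serves as the test set exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

With n = 630 and k = 5, each image appears in exactly one 126-image test fold, so the five per-fold accuracies can be averaged as reported.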
Affiliation(s)
- Chen Liang
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yong Qin
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yingkai Ma
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Ren Wang
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Xiangning Xu
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Jinping Yu
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Songcen Lv
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
235
Buzea CG, Buga R, Paun MA, Albu M, Iancu DT, Dobrovat B, Agop M, Paun VP, Eva L. AI Evaluation of Imaging Factors in the Evolution of Stage-Treated Metastases Using Gamma Knife. Diagnostics (Basel) 2023; 13:2853. [PMID: 37685391 PMCID: PMC10486549 DOI: 10.3390/diagnostics13172853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 08/15/2023] [Accepted: 08/30/2023] [Indexed: 09/10/2023] Open
Abstract
BACKGROUND The study investigated whether three deep-learning models, namely the CNN_model (trained from scratch), the TL_model (transfer learning), and the FT_model (fine-tuning), could predict the early response of brain metastases (BM) to radiosurgery using minimal pre-processing of the MRI images. The dataset consisted of 19 BM patients who underwent stereotactic radiosurgery (SRS) within 3 months. The images used included axial fluid-attenuated inversion recovery (FLAIR) sequences and high-resolution contrast-enhanced T1-weighted (CE T1w) sequences from the tumor center. The patients were classified as responders (complete or partial response) or non-responders (stable or progressive disease). METHODS A total of 2320 images from the regression class and 874 from the progression class were randomly assigned to training, testing, and validation groups. The DL models were trained using the training-group images and labels, and the validation dataset was used to select the best model for classifying the evaluation images as showing regression or progression. RESULTS Among the 19 patients, 15 were classified as "responders" and 4 as "non-responders". The CNN_model achieved good performance for both classes, showing high precision, recall, and F1-scores. The overall accuracy was 0.98, with an AUC of 0.989. The TL_model performed well in identifying the "progression" class but could benefit from improved precision, while the "regression" class exhibited high precision but lower recall. The overall accuracy of the TL_model was 0.92, and the AUC was 0.936. The FT_model showed high recall for "progression" but low precision, and for the "regression" class it exhibited high precision but lower recall. The overall accuracy of the FT_model was 0.83, with an AUC of 0.885. CONCLUSIONS Among the three models analyzed, the CNN_model, trained from scratch, provided the most accurate predictions of SRS responses for unseen BM images. This suggests that CNN models could potentially predict SRS prognoses from small datasets. However, further analysis is needed, especially in cases where class imbalances exist.
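The AUC figures quoted above can be reproduced from a model's scores with the rank-based (Mann-Whitney) formulation of AUC: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal illustrative sketch in Python, with toy labels and scores rather than the study's data:

```python
def auc_score(labels, scores):
    """Rank-based AUC (Mann-Whitney U): the fraction of positive-negative
    pairs where the positive receives the higher score; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need examples from both classes")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with hypothetical model scores (not the study's data)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(round(auc_score(labels, scores), 3))  # 8/9 of pairs ranked correctly
```

A perfectly separating model scores 1.0; a random one hovers around 0.5.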
Collapse
Affiliation(s)
- Calin G. Buzea
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
- National Institute of Research and Development for Technical Physics, IFT, 700050 Iasi, Romania
| | - Razvan Buga
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
- Department of Medical Oncology and Radiotherapy, University of Medicine and Pharmacy “Grigore T. Popa”, 700115 Iasi, Romania
| | - Maria-Alexandra Paun
- Division Radio Monitoring and Equipment, Section Market Access and Conformity, Federal Office of Communications OFCOM, Avenue de l’Avenir 44, CH-2501 Biel/Bienne, Switzerland;
| | - Madalina Albu
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
| | - Dragos T. Iancu
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
- Department of Medical Oncology and Radiotherapy, University of Medicine and Pharmacy “Grigore T. Popa”, 700115 Iasi, Romania
- Regional Institute of Oncology, 700483 Iasi, Romania
| | - Bogdan Dobrovat
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
- Department of Medical Oncology and Radiotherapy, University of Medicine and Pharmacy “Grigore T. Popa”, 700115 Iasi, Romania
| | - Maricel Agop
- Physics Department, Technical University “Gheorghe Asachi” Iasi, 700050 Iasi, Romania;
| | - Viorel-Puiu Paun
- Physics Department, Faculty of Applied Sciences, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Romanian Scientists Academy, 54 Splaiul Independentei, 050094 Bucharest, Romania
| | - Lucian Eva
- Clinical Emergency Hospital “Prof. Dr. Nicolae Oblu”, 700309 Iasi, Romania; (C.G.B.); (R.B.); (M.A.); (D.T.I.); (B.D.); (L.E.)
- Faculty of Dental Medicine, Universitatea Apollonia, 700399 Iasi, Romania
| |
Collapse
|
236
|
Wang D, Wang X, Wang L, Li M, Da Q, Liu X, Gao X, Shen J, He J, Shen T, Duan Q, Zhao J, Li K, Qiao Y, Zhang S. A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification. Sci Data 2023; 10:574. [PMID: 37660106 PMCID: PMC10475041 DOI: 10.1038/s41597-023-02460-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 08/09/2023] [Indexed: 09/04/2023] Open
Abstract
Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications. Recent advances further enable adapting foundation models to downstream tasks efficiently using only a few training samples, e.g., via in-context learning. Yet, the application of such learning paradigms to medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks. In this paper, we target approaches for adapting foundation models to medical image classification and present a novel dataset and benchmark for their evaluation, i.e., examining the overall performance of adapting large-scale foundation models downstream to a set of diverse real-world clinical tasks. We collect five sets of medical imaging data from multiple institutes targeting a variety of real-world clinical tasks (22,349 images in total): thoracic disease screening in X-rays, pathological lesion tissue screening, lesion detection in endoscopy images, neonatal jaundice evaluation, and diabetic retinopathy grading. Results of multiple baseline methods are demonstrated on the proposed dataset from both accuracy and cost-effectiveness perspectives.
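One inexpensive adaptation baseline of the kind such benchmarks compare is a nearest-centroid ("prototype") classifier over frozen foundation-model embeddings: average the embeddings of the few labeled examples per class, then assign new samples to the nearest prototype. A hypothetical sketch; the 2-D "embeddings" and class names below are invented for illustration:

```python
from collections import defaultdict
import math

def nearest_centroid_fit(embeddings, labels):
    """Average the frozen-encoder embeddings per class ("prototypes")."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for x, y in zip(embeddings, labels):
        sums[y] = list(x) if sums[y] is None else [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(prototypes, x):
    """Assign the class whose prototype is closest in Euclidean distance."""
    return min(prototypes, key=lambda y: math.dist(prototypes[y], x))

# Toy 2-D "embeddings" standing in for frozen foundation-model features
protos = nearest_centroid_fit([[0, 0], [1, 1], [9, 9], [10, 10]],
                              ["healthy", "healthy", "lesion", "lesion"])
print(predict(protos, [8, 9]))  # closer to the "lesion" prototype
```

No encoder weights are updated, which is what makes this kind of baseline cheap to evaluate on many clinical tasks.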
Collapse
Affiliation(s)
- Dequan Wang
- Shanghai AI Laboratory, Shanghai, China
- Shanghai Jiaotong University, Shanghai, China
| | | | | | | | - Qian Da
- Shanghai Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Xiaoqiang Liu
- Shanghai Tenth People's Hospital of Tongji University, Shanghai, China
| | | | - Jun Shen
- Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Junjun He
- Shanghai AI Laboratory, Shanghai, China
| | | | - Qi Duan
- Sensetime Research, Shanghai, China
| | - Jie Zhao
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Kang Li
- Shanghai AI Laboratory, Shanghai, China
- West China Hospital, Sichuan University, Chengdu, China
| | - Yu Qiao
- Shanghai AI Laboratory, Shanghai, China.
| | | |
Collapse
|
237
|
Wang L, Yang F, Bao X, Bo X, Dang S, Wang R, Pan F. Deep learning-mediated prediction of concealed accessory pathway based on sinus rhythmic electrocardiograms. Ann Noninvasive Electrocardiol 2023; 28:e13072. [PMID: 37530078 PMCID: PMC10475885 DOI: 10.1111/anec.13072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 06/01/2023] [Accepted: 06/27/2023] [Indexed: 08/03/2023] Open
Abstract
BACKGROUND A concealed accessory pathway (AP) may cause atrioventricular reentrant tachycardia, impacting patients' health. However, it is asymptomatic and undetectable during sinus rhythm. METHODS To detect concealed AP from electrocardiography (ECG) images, we collected normal sinus-rhythm ECG images from concealed AP patients and healthy subjects. All ECG images were randomly allocated to training and testing datasets, which were used to train and test six popular convolutional neural networks, both from ImageNet pre-training and from random initialization. RESULTS We screened 152 ECG recordings in the concealed AP group and 600 in the control group. There were no statistically significant differences between the two groups in PR interval or QRS interval. However, the QT interval and QTc were slightly higher in the control group than in the concealed AP group. On the testing set, ResNet26, SE-ResNet50, MobileNetV3_large_100, and DenseNet169 achieved a sensitivity above 87.0% with a specificity above 98.0%. Models trained from random initialization showed performance and convergence similar to models trained from ImageNet pre-training. CONCLUSION Our study suggests that deep learning could be an effective way to predict concealed AP from normal sinus-rhythm ECG images, and our results may encourage rethinking the possibility of training from random initialization on ECG image tasks.
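The QTc values compared between the groups are conventionally obtained with a heart-rate correction such as Bazett's formula, QTc = QT / sqrt(RR) with RR in seconds. A small sketch with hypothetical interval values (the abstract does not state which correction formula was used):

```python
import math

def qtc_bazett(qt_ms, rr_ms):
    """Bazett-corrected QT interval: QTc = QT / sqrt(RR), RR in seconds.
    Both inputs in milliseconds; result in milliseconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

# Hypothetical reading: QT of 400 ms at 75 bpm (RR = 800 ms)
print(round(qtc_bazett(400, 800), 1))  # 447.2 ms
```

At 60 bpm (RR = 1000 ms) the correction is the identity, so QTc equals QT.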
Collapse
Affiliation(s)
- Lei Wang
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
- Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, China
| | - Fang Yang
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
| | - Xiao‐Jing Bao
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
| | - Xiao‐Ping Bo
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
| | - Shipeng Dang
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
| | - Ru‐Xing Wang
- Department of Cardiology, The Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
| | - Feng Pan
- Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, China
| |
Collapse
|
238
|
Liu M, Liu H, Wu T, Zhu Y, Zhou Y, Huang Z, Xiang C, Huang J. ACP-Dnnel: anti-coronavirus peptides' prediction based on deep neural network ensemble learning. Amino Acids 2023; 55:1121-1136. [PMID: 37402073 DOI: 10.1007/s00726-023-03300-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 06/25/2023] [Indexed: 07/05/2023]
Abstract
The ongoing COVID-19 pandemic has caused dramatic loss of human life, and there is an urgent need for safe and efficient anti-coronavirus drugs. Anti-coronavirus peptides (ACovPs) can inhibit coronavirus infection; with high efficiency, low toxicity, and broad-spectrum inhibitory effects on coronaviruses, they are promising candidates for development into a new type of anti-coronavirus drug. Wet-lab experiments are the traditional way of identifying ACovPs, but they are less efficient and more expensive. With the accumulation of experimental data on ACovPs, computational prediction provides a cheaper and faster way to find candidate anti-coronavirus peptides. In this study, we ensemble several state-of-the-art machine learning methodologies to build nine classification models for the prediction of ACovPs. These models were pre-trained using deep neural networks, and the performance of our ensemble model, ACP-Dnnel, was evaluated across three datasets and an independent dataset. We followed Chou's 5-step rules: (1) we constructed the benchmark datasets data1, data2, and data3 for training and testing, and introduced the independent validation dataset ACVP-M; (2) we analyzed the peptide sequence composition of the benchmark datasets; (3) we constructed the ACP-Dnnel model, merging a deep convolutional neural network (DCNN) with a bi-directional long short-term memory (BiLSTM) network as the base model, pre-trained to extract the features embedded in the benchmark datasets, and then ensembled nine classification algorithms that vote together for the final prediction; (4) we used tenfold cross-validation during training and evaluated the final model's performance; (5) finally, we constructed a user-friendly web server accessible to the public at http://150.158.148.228:5000/ . The highest accuracy (ACC) of ACP-Dnnel reaches 97%, and the Matthews correlation coefficient (MCC) exceeds 0.9. Across the three datasets, its average accuracy is 96.0%. On the latest independent validation dataset, ACP-Dnnel improved MCC, SP, and ACC by 6.2%, 7.5%, and 6.3%, respectively. These results suggest that ACP-Dnnel can assist the laboratory identification of ACovPs, speeding up anti-coronavirus peptide drug discovery and development.
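The voting step in (3) can be illustrated with a minimal hard-voting ensemble: each classifier labels each sample, and the majority label wins. The three toy classifiers and their outputs below are hypothetical stand-ins, not the nine models of ACP-Dnnel:

```python
from collections import Counter

def hard_vote(predictions_per_model):
    """Majority vote across models for each sample. Input is a list of
    per-model prediction lists, one label per sample."""
    per_sample = zip(*predictions_per_model)  # transpose: sample-wise votes
    return [Counter(votes).most_common(1)[0][0] for votes in per_sample]

# Three toy classifiers labeling four peptides as ACovP (1) or not (0)
model_preds = [
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
]
print(hard_vote(model_preds))  # [1, 1, 1, 0]
```

Soft voting (averaging predicted probabilities instead of counting labels) is a common variant when classifiers expose calibrated scores.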
Collapse
Affiliation(s)
- Mingyou Liu
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou, China
- School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, Sichuan, China
| | - Hongmei Liu
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou, China
| | - Tao Wu
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou, China
| | - Yingxue Zhu
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou, China
| | - Yuwei Zhou
- School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, Sichuan, China
| | - Ziru Huang
- School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, Sichuan, China
| | - Changcheng Xiang
- School of Computer Science and Technology, Aba Teachers University, Aba, Sichuan, China.
| | - Jian Huang
- School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, Sichuan, China.
- School of Healthcare Technology, Chengdu Neusoft University, Chengdu, Sichuan, China.
| |
Collapse
|
239
|
Zhao J, Xing Z, Chen Z, Wan L, Han T, Fu H, Zhu L. Uncertainty-Aware Multi-Dimensional Mutual Learning for Brain and Brain Tumor Segmentation. IEEE J Biomed Health Inform 2023; 27:4362-4372. [PMID: 37155398 DOI: 10.1109/jbhi.2023.3274255] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Existing segmentation methods for brain MRI data usually leverage 3D CNNs on 3D volumes or 2D CNNs on 2D image slices. While volume-based approaches respect spatial relationships across slices well, slice-based methods typically excel at capturing fine local features, and there is a wealth of complementary information between their segmentation predictions. Inspired by this observation, we develop an Uncertainty-aware Multi-dimensional Mutual learning framework that trains networks of different dimensionalities simultaneously, each providing useful soft labels as supervision to the others, thus effectively improving generalization. Specifically, our framework builds upon a 2D CNN, a 2.5D CNN, and a 3D CNN, and an uncertainty gating mechanism is leveraged to facilitate the selection of qualified soft labels, ensuring the reliability of the shared information. The proposed method is a general framework and can be applied to varying backbones. Experimental results on three datasets demonstrate that our method significantly enhances the performance of the backbone network, achieving Dice metric improvements of 2.8% on MeniSeg, 1.4% on IBSR, and 1.3% on BraTS2020.
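Uncertainty gating of soft labels can be sketched with predictive entropy as the confidence measure: a peer network's prediction is admitted as an extra supervision target only if its entropy falls below a threshold. This is an illustrative simplification; the threshold and the toy class distributions here are invented, not the paper's gating rule:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a softmax probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def gate_soft_labels(soft_labels, max_entropy=0.5):
    """Keep only confident peer predictions as supervision targets."""
    return [p for p in soft_labels if entropy(p) <= max_entropy]

# Peer-network soft labels for three voxels (toy 2-class example)
peers = [[0.95, 0.05], [0.55, 0.45], [0.99, 0.01]]
kept = gate_soft_labels(peers)
print(len(kept))  # only the two confident predictions pass the gate
```

In practice the gate would be applied per voxel during training, so each network receives peer supervision only where the peer is reliable.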
Collapse
|
240
|
Kim B, Lee GY, Park SH. Attention fusion network with self-supervised learning for staging of osteonecrosis of the femoral head (ONFH) using multiple MR protocols. Med Phys 2023; 50:5528-5540. [PMID: 36945733 DOI: 10.1002/mp.16380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 11/21/2022] [Accepted: 02/20/2023] [Indexed: 03/23/2023] Open
Abstract
BACKGROUND Osteonecrosis of the femoral head (ONFH) is characterized by bone cell death in the hip joint and involves severe groin pain. Staging of ONFH is commonly based on magnetic resonance imaging (MRI) and computed tomography (CT), which are important for establishing effective treatment plans. There have been some attempts to automate ONFH staging using deep learning, but few have used only MR images. PURPOSE To propose a deep learning model for MR-only ONFH staging, which can avoid the additional cost and radiation exposure of acquiring CT images. METHODS We integrated information from the MR images of five different imaging protocols with a newly proposed attention fusion method composed of intra-modality attention and inter-modality attention. In addition, self-supervised learning was used to learn deep representations from a large paired MR-CT dataset: the encoder of the MR-to-CT translation network was used to pre-train the staging network, aiming to overcome the lack of annotated staging data. Ablation studies were performed to investigate the contribution of each proposed component. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the performance of the networks. RESULTS Our model improved the four-way classification of the Association Research Circulation Osseous (ARCO) stage from multi-protocol MR images by 6.8 percentage points in AUROC over a plain VGG network. The ablation experiments showed that each proposed component increased performance, by 4.7 percentage points (self-supervised learning) and 2.6 percentage points (attention fusion) in AUROC. CONCLUSIONS We have shown the feasibility of MR-only ONFH staging using self-supervised learning and attention fusion. The large amounts of paired MR-CT data in hospitals could be used to further improve staging performance, and the proposed method has potential for diagnosing other diseases that require staging from multiple MR protocols.
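The fusion idea, weighting features from several MR protocols before combining them, can be sketched as a softmax-weighted sum of per-protocol feature vectors. The scores below are hand-picked for illustration, whereas in the paper's setting they would be produced by learned attention modules:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(features, scores):
    """Weighted sum of per-protocol feature vectors; weights come from a
    softmax over (here hand-picked, normally learned) attention scores."""
    w = softmax(scores)
    dim = len(features[0])
    return [sum(w[i] * features[i][d] for i in range(len(features)))
            for d in range(dim)]

# Toy features from three MR protocols; the third gets the highest score
feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
fused = attention_fuse(feats, [0.1, 0.1, 2.0])
print([round(v, 3) for v in fused])
```

Because the weights sum to one, the fused vector stays on the same scale as the inputs regardless of how many protocols contribute.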
Collapse
Affiliation(s)
- Bomin Kim
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
| | - Geun Young Lee
- Department of Radiology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Republic of Korea
| | - Sung-Hong Park
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
| |
Collapse
|
241
|
Shen L, Gao C, Hu S, Kang D, Zhang Z, Xia D, Xu Y, Xiang S, Zhu Q, Xu G, Tang F, Yue H, Yu W, Zhang Z. Using Artificial Intelligence to Diagnose Osteoporotic Vertebral Fractures on Plain Radiographs. J Bone Miner Res 2023; 38:1278-1287. [PMID: 37449775 DOI: 10.1002/jbmr.4879] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 06/18/2023] [Accepted: 07/06/2023] [Indexed: 07/18/2023]
Abstract
Osteoporotic vertebral fracture (OVF) is a risk factor for morbidity and mortality in the elderly population, and accurate diagnosis is important for improving treatment outcomes. OVF diagnosis suffers from high misdiagnosis and underdiagnosis rates, as well as a high workload. Deep learning methods applied to plain radiographs, a simple, fast, and inexpensive examination, might solve this problem. We developed and validated a deep-learning-based vertebral fracture diagnostic system using an area loss ratio, which assisted a multitasking network in performing skeletal position detection and segmentation and in identifying and grading vertebral fractures. As the training set and internal validation set, we used 11,397 plain radiographs from six community centers in Shanghai. For the external validation set, 1276 participants were recruited from the outpatient clinic of the Shanghai Sixth People's Hospital (1276 plain radiographs). Radiologists reviewed all X-ray images and used the Genant semiquantitative tool for fracture diagnosis and grading as the ground truth. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were used to evaluate diagnostic performance. The AI_OVF_SH system demonstrated high accuracy and computational speed in skeletal position detection and segmentation. In the internal validation set, the accuracy, sensitivity, and specificity of the AI_OVF_SH model were 97.41%, 84.08%, and 97.25%, respectively, for all fractures. The sensitivity and specificity were 88.55% and 99.74% for moderate fractures, and 92.30% and 99.92% for severe fractures. In the external validation set, the accuracy, sensitivity, and specificity for all fractures were 96.85%, 83.35%, and 94.70%, respectively. For moderate fractures, sensitivity and specificity were 85.61% and 99.85%, and for severe fractures, 93.46% and 99.92%. The AI_OVF_SH system is therefore an efficient tool to assist radiologists and clinicians in improving the diagnosis of vertebral fractures. © 2023 The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of the American Society for Bone and Mineral Research (ASBMR).
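The reported measures all follow from confusion-matrix counts (true/false positives and negatives). A small sketch with toy counts, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # recall of fractures
        "specificity": tn / (tn + fp),          # recall of non-fractures
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Toy counts (hypothetical, chosen only to illustrate the formulas)
m = diagnostic_metrics(tp=84, fp=3, tn=97, fn=16)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))
```

Note that with imbalanced prevalence, as in fracture screening, accuracy alone can look high even when sensitivity is modest, which is why the study reports all five measures.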
Collapse
Affiliation(s)
- Li Shen
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Chao Gao
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shundong Hu
- Department of Radiology, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Dan Kang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
| | - Zhaogang Zhang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
| | - Dongdong Xia
- Department of Orthopaedics, Ning Bo First Hospital, Zhejiang, China
| | - Yiren Xu
- Department of Radiology, Ning Bo First Hospital, Zhejiang, China
| | - Shoukui Xiang
- Department of Endocrinology and Metabolism, The First People's Hospital of Changzhou, Changzhou, China
| | - Qiong Zhu
- Kangjian Community Health Service Center, Shanghai, China
| | - GeWen Xu
- Kangjian Community Health Service Center, Shanghai, China
| | - Feng Tang
- Jinhui Community Health Service Center, Shanghai, China
| | - Hua Yue
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Yu
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
| | - Zhenlin Zhang
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
242
|
Patil S, Joda T, Soffe B, Awan KH, Fageeh HN, Tovani-Palone MR, Licari FW. Efficacy of artificial intelligence in the detection of periodontal bone loss and classification of periodontal diseases: A systematic review. J Am Dent Assoc 2023; 154:795-804.e1. [PMID: 37452813 DOI: 10.1016/j.adaj.2023.05.010] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 05/13/2023] [Accepted: 05/17/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND Artificial intelligence (AI) can aid in the diagnosis and treatment planning of periodontal disease by reducing subjectivity. This systematic review aimed to evaluate the efficacy of AI models in detecting radiographic periodontal bone loss (PBL) and their accuracy in classifying lesions. TYPES OF STUDIES REVIEWED The authors conducted an electronic search of PubMed, Scopus, and Web of Science for articles published through August 2022. Articles evaluating the efficacy of AI in determining PBL were included. The authors assessed the articles using the Quality Assessment for Studies of Diagnostic Accuracy tool and used the Grading of Recommendations Assessment, Development and Evaluation criteria to evaluate the certainty of evidence. RESULTS Of the 13 articles identified through the electronic search, 6 studies met the inclusion criteria, using a variety of AI algorithms and different modalities, including panoramic and intraoral radiographs. Sensitivity, specificity, accuracy, and pixel accuracy were the outcomes measured. Although some studies found no substantial difference between the performance of AI and that of dental clinicians, others showed AI's superiority in detecting PBL. The evidence suggests that AI has the potential to aid in the detection of PBL and the classification of periodontal diseases. However, further research is needed to standardize AI algorithms and validate their clinical usefulness. PRACTICAL IMPLICATIONS Although the use of AI may offer some benefits in the detection and classification of periodontal diseases, the low level of evidence and the inconsistent performance of AI algorithms suggest that caution should be exercised when considering the use of AI models in diagnosing PBL. This review was registered at PROSPERO (CRD42022364600).
Collapse
|
243
|
Orhan K, Aktuna Belgin C, Manulis D, Golitsyna M, Bayrak S, Aksoy S, Sanders A, Önder M, Ezhov M, Shamshiev M, Gusarev M, Shlenskii V. Determining the reliability of diagnosis and treatment using artificial intelligence software with panoramic radiographs. Imaging Sci Dent 2023; 53:199-208. [PMID: 37799743 PMCID: PMC10548159 DOI: 10.5624/isd.20230109] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 07/07/2023] [Accepted: 07/10/2023] [Indexed: 10/07/2023] Open
Abstract
Purpose The objective of this study was to evaluate the accuracy and effectiveness of an artificial intelligence (AI) program in identifying dental conditions on panoramic radiographs (PRs), and to assess the appropriateness of its treatment recommendations. Materials and Methods PRs from 100 patients (representing 4497 teeth) with known clinical examination findings were randomly selected from a university database. Three dentomaxillofacial radiologists and the Diagnocat AI software evaluated these PRs. The evaluations covered various dental conditions and treatments, including canal fillings, caries, cast posts and cores, dental calculus, fillings, furcation lesions, implants, lack of interproximal tooth contact, open margins, overhangs, periapical lesions, periodontal bone loss, short fillings, voids in root fillings, overfillings, pontics, root fragments, impacted teeth, artificial crowns, missing teeth, and healthy teeth. Results The AI demonstrated almost perfect agreement (exceeding 0.81) with the ground truth in most of the assessments. Sensitivity was very high (above 0.8) for the evaluation of healthy teeth, artificial crowns, dental calculus, missing teeth, fillings, lack of interproximal contact, periodontal bone loss, and implants. However, sensitivity was low for the assessment of caries, periapical lesions, pontics, voids in root canal fillings, and overhangs. Conclusion Despite the limitations of this study, the synthesized data suggest that AI-based decision support systems can serve as a valuable tool for detecting dental conditions when used with PRs in clinical dental applications.
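"Almost perfect agreement (exceeding 0.81)" matches the conventional Landis-Koch interpretation of Cohen's kappa, a chance-corrected agreement statistic. A minimal kappa computation on hypothetical per-tooth labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) /
    (1 - chance agreement), for two raters over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)           # by chance
    return (po - pe) / (1 - pe)

# Toy per-tooth labels from the AI and one radiologist (hypothetical)
ai    = ["caries", "healthy", "healthy", "filling", "healthy", "caries"]
truth = ["caries", "healthy", "healthy", "filling", "caries",  "caries"]
print(round(cohens_kappa(ai, truth), 2))
```

Kappa of 1.0 means perfect agreement; values above 0.81 are conventionally read as "almost perfect".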
Collapse
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
| | - Ceren Aktuna Belgin
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Hatay Mustafa Kemal University, Hatay, Turkey
| | | | | | - Seval Bayrak
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Abant İzzet Baysal University, Bolu, Turkey
| | - Secil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
| | | | - Merve Önder
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
| | | | | | | | | |
Collapse
|
244
|
Meneses JP, Arrieta C, Della Maggiora G, Besa C, Urbina J, Arrese M, Gana JC, Galgani JE, Tejos C, Uribe S. Liver PDFF estimation using a multi-decoder water-fat separation neural network with a reduced number of echoes. Eur Radiol 2023; 33:6557-6568. [PMID: 37014405 PMCID: PMC10415440 DOI: 10.1007/s00330-023-09576-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 03/09/2023] [Accepted: 03/20/2023] [Indexed: 04/05/2023]
Abstract
OBJECTIVE To accurately estimate liver PDFF from chemical shift-encoded (CSE) MRI using a deep learning (DL)-based Multi-Decoder Water-Fat separation Network (MDWF-Net) that operates on complex-valued CSE-MR images with only 3 echoes. METHODS The proposed MDWF-Net and a U-Net model were independently trained using the first 3 echoes of MRI data from 134 subjects, acquired with a conventional 6-echo abdominal protocol at 1.5 T. The resulting models were then evaluated on unseen CSE-MR images from 14 subjects, acquired with a 3-echo CSE-MR pulse sequence of shorter duration than the standard protocol. The resulting PDFF maps were qualitatively assessed by two radiologists and quantitatively assessed at two corresponding liver ROIs, using Bland-Altman and regression analysis for mean values and ANOVA testing for standard deviations (STD) (significance level: .05). A 6-echo graph-cut reconstruction was considered ground truth. RESULTS The radiologists' assessment showed that, unlike U-Net, MDWF-Net matched the quality of the ground truth despite using half of the information. Regarding mean PDFF values at the ROIs, MDWF-Net showed better agreement with the ground truth (regression slope = 0.94, R2 = 0.97) than U-Net (regression slope = 0.86, R2 = 0.93). Moreover, ANOVA post hoc analysis of STDs showed a statistical difference between graph cuts and U-Net (p < .05), but not for MDWF-Net (p = .53). CONCLUSION MDWF-Net estimated liver PDFF with an accuracy comparable to the reference graph-cut method using only 3 echoes, thus allowing a reduction in acquisition times. CLINICAL RELEVANCE STATEMENT We prospectively validated that using a multi-decoder convolutional neural network to estimate liver proton density fat fraction allows a significant reduction in MR scan time by halving the number of echoes required.
KEY POINTS • A novel water-fat separation neural network allows liver PDFF estimation from multi-echo MR images with a reduced number of echoes. • Prospective single-center validation demonstrated that echo reduction significantly shortens scan time compared with the standard 6-echo acquisition. • The qualitative and quantitative performance of the proposed method showed no significant differences in PDFF estimation with respect to the reference technique.
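The agreement statistics reported above (regression slope and R2) are ordinary least-squares quantities. A self-contained sketch on hypothetical PDFF pairs, not the study's measurements:

```python
def ols_slope_r2(x, y):
    """Least-squares slope and coefficient of determination R^2
    for the fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

# Toy PDFF pairs: reference (graph cut) vs a hypothetical network output
ref  = [2.0, 5.0, 10.0, 20.0, 30.0]
pred = [2.1, 4.8, 9.5, 19.0, 28.6]
slope, r2 = ols_slope_r2(ref, pred)
print(round(slope, 3), round(r2, 4))
```

A slope near 1 with high R2 indicates the network tracks the reference PDFF without systematic over- or under-estimation, which is the sense in which the paper's slope of 0.94 beats U-Net's 0.86.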
Collapse
Affiliation(s)
- Juan Pablo Meneses
- Biomedical Imaging Center, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Department of Electrical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Cristobal Arrieta
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Faculty of Engineering, Universidad Alberto Hurtado, Santiago, Chile
| | | | - Cecilia Besa
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Department of Radiology, School of Medicine, Pontificia Universidad Catolica de Chile, Santiago, Chile
| | - Jesús Urbina
- Biomedical Imaging Center, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Complejo Asistencial Dr. Sótero del Río, Santiago, Chile
| | - Marco Arrese
- Department of Gastroenterology, School of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Juan Cristóbal Gana
- Department of Pediatric Gastroenterology and Nutrition, Division of Pediatrics, School of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Jose E Galgani
- Department of Health Sciences, Nutrition and Dietetics Career, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Department of Nutrition, Diabetes and Metabolism, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Cristian Tejos
- Biomedical Imaging Center, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Department of Electrical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - Sergio Uribe
- Biomedical Imaging Center, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering iHEALTH, Santiago, Chile
- Department of Radiology, School of Medicine, Pontificia Universidad Catolica de Chile, Santiago, Chile
- Department of Medical Imaging and Radiation Sciences, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
| |
Collapse
|
245
|
Mansour M, Cumak EN, Kutlu M, Mahmud S. Deep learning based suture training system. Surg Open Sci 2023; 15:1-11. [PMID: 37601890 PMCID: PMC10432819 DOI: 10.1016/j.sopen.2023.07.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 07/21/2023] [Accepted: 07/29/2023] [Indexed: 08/22/2023] Open
Abstract
Background and objectives Surgical suturing is a fundamental skill that all medical and dental students learn during their education. Currently, the grading of students' suture skills in the medical faculty during general surgery training is relative, and students do not have the opportunity to learn specific techniques. Recent technological advances, however, have made it possible to classify and measure suture skills using artificial intelligence methods such as Deep Learning (DL). This work aims to evaluate the success of surgical suturing using DL techniques. Methods We compared six Convolutional Neural Network (CNN) models: VGG16, VGG19, Xception, Inception, MobileNet, and DenseNet. We used a dataset of suture images containing two classes, successful and unsuccessful, and applied statistical metrics to compare the precision, recall, and F1 scores of the models. Results Xception had the highest accuracy at 95%, followed by MobileNet at 91%, DenseNet at 90%, Inception at 84%, VGG16 at 73%, and VGG19 at 61%. We also developed a graphical user interface that allows users to evaluate suture images by uploading them or using the camera. The images are then interpreted by the DL models, and the results are displayed on the screen. Conclusions The initial findings suggest that the use of DL techniques can minimize errors due to inexperience and allow physicians to use their time more efficiently by digitizing the process.
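The precision, recall, and F1 comparison described above can be sketched for the binary successful/unsuccessful task. This is an assumed workflow, not the paper's code; the label encoding and function name are illustrative.

```python
# Illustrative sketch: binary classification metrics used to compare the six
# CNN models on the successful (1) vs. unsuccessful (0) suture task.

def binary_metrics(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

In practice one would compute these per model over a held-out test split and rank the models by the chosen metric, as the authors do with accuracy.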
Collapse
Affiliation(s)
- Mohammed Mansour
- Department of Mechatronics Engineering, Sakarya University of Applied Sciences, Sakarya, Turkey
| | - Eda Nur Cumak
- Department of Mechatronics Engineering, Sakarya University of Applied Sciences, Sakarya, Turkey
| | - Mustafa Kutlu
- Department of Mechatronics Engineering, Sakarya University of Applied Sciences, Sakarya, Turkey
| | - Shekhar Mahmud
- Department of Systems Engineering, Military Technological College, Muscat, Oman
| |
Collapse
|
246
|
Iakunchykova O, Schirmer H, Vangberg T, Wang Y, Benavente ED, van Es R, van de Leur RR, Lindekleiv H, Attia ZI, Lopez-Jimenez F, Leon DA, Wilsgaard T. Machine-learning-derived heart and brain age are independently associated with cognition. Eur J Neurol 2023; 30:2611-2619. [PMID: 37254942 DOI: 10.1111/ene.15902] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 05/03/2023] [Accepted: 05/28/2023] [Indexed: 06/01/2023]
Abstract
BACKGROUND AND PURPOSE A heart age biomarker has been developed using deep neural networks applied to electrocardiograms. Whether this biomarker is associated with cognitive function was investigated. METHODS Using 12-lead electrocardiograms, heart age was estimated for a population-based sample (N = 7779, age 40-85 years, 45.3% men). Associations between heart delta age (HDA) and cognitive test scores were studied, adjusted for cardiovascular risk factors. In addition, the relationship between HDA, brain delta age (BDA) and cognitive test scores was investigated in a mediation analysis. RESULTS Significant associations were found between HDA and the Word test, Digit Symbol Coding Test and tapping test scores. HDA was correlated with BDA (Pearson's r = 0.12, p = 0.0001). Moreover, 13% (95% confidence interval 3-36) of the HDA effect on the tapping test score was mediated through BDA. DISCUSSION Heart delta age, representing the cumulative effects of life-long exposures, was associated with brain age. HDA was associated with cognitive function, and this association was only minimally mediated through BDA.
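The delta-age quantities above can be sketched as follows. This is a minimal sketch assuming "delta age" is the model-predicted age minus chronological age and that the reported HDA-BDA association is a Pearson correlation; the function names are illustrative, not from the study.

```python
# Illustrative sketch: delta age and the Pearson correlation between two
# delta-age biomarkers (e.g., heart delta age vs. brain delta age).

def delta_age(predicted, chronological):
    """Delta age = model-predicted age minus chronological age, per subject."""
    return [p - c for p, c in zip(predicted, chronological)]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

The mediation step reported in the abstract (13% of the HDA effect mediated through BDA) would additionally require regression-based mediation analysis, which is beyond this sketch.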
Collapse
Affiliation(s)
- Olena Iakunchykova
- Center for Lifespan Changes in Brain and Cognition, Department of Psychology, University of Oslo, Oslo, Norway
| | - Henrik Schirmer
- Akershus University Hospital, Lørenskog, Norway
- Institute of Clinical Medicine, Campus Ahus, University of Oslo, Oslo, Norway
| | - Torgil Vangberg
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- PET Imaging Center, University Hospital of North Norway, Tromsø, Norway
| | - Yunpeng Wang
- Center for Lifespan Changes in Brain and Cognition, Department of Psychology, University of Oslo, Oslo, Norway
| | - Ernest D Benavente
- Department of Experimental Cardiology, University Medical Center, Utrecht, The Netherlands
| | - René van Es
- Department of Cardiology, University Medical Center, Utrecht, The Netherlands
| | | | - Haakon Lindekleiv
- University Hospital of North Norway, Tromsø, Norway
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
| | - Zachi I Attia
- Mayo Clinic College of Medicine, Rochester, Minnesota, USA
| | | | - David A Leon
- Department of Noncommunicable Disease Epidemiology, London School of Hygiene and Tropical Medicine, London, UK
| | - Tom Wilsgaard
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
| |
Collapse
|
247
|
Tu DY, Lin PC, Chou HH, Shen MR, Hsieh SY. Slice-Fusion: Reducing False Positives in Liver Tumor Detection for Mask R-CNN. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:3267-3277. [PMID: 37027274 DOI: 10.1109/tcbb.2023.3265394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Automatic liver tumor detection from computed tomography (CT) makes clinical examinations more accurate. However, deep learning-based detection algorithms are characterized by high sensitivity and low precision, which hinders diagnosis given that false-positive tumors must first be identified and excluded. These false positives arise because detection models incorrectly identify partial volume artifacts as lesions, which in turn stems from the inability to learn the perihepatic structure from a global perspective. To overcome this limitation, we propose a novel slice-fusion method that mines the global structural relationship between the tissues in the target CT slices and fuses the features of adjacent slices according to the importance of the tissues. Furthermore, we design a new network based on our slice-fusion method and the Mask R-CNN detection model, called Pinpoint-Net. We evaluated the proposed model on the Liver Tumor Segmentation Challenge (LiTS) dataset and our liver metastases dataset. Experiments demonstrated that our slice-fusion method not only enhances tumor detection by reducing the number of false-positive tumors smaller than 10 mm, but also improves segmentation performance. Without bells and whistles, a single Pinpoint-Net showed outstanding performance in liver tumor detection and segmentation on the LiTS test dataset compared with other state-of-the-art models.
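The slice-fusion idea described above, fusing adjacent-slice features weighted by tissue importance, can be caricatured in a few lines. This is a hypothetical sketch of the general principle only; the actual method operates on CNN feature maps inside Mask R-CNN, and the weighting scheme here is an assumption.

```python
# Hypothetical sketch: fuse each slice's feature vector with its neighbours,
# weighting each slice by an (assumed) scalar importance score.

def fuse_slices(features, importance):
    """features: list of per-slice feature vectors; importance: per-slice scalars.
    Returns fused vectors, each an importance-weighted average over the slice
    and its immediate neighbours."""
    fused = []
    n = len(features)
    for i in range(n):
        idxs = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
        total = sum(importance[j] for j in idxs)
        fused.append([
            sum(importance[j] * features[j][k] for j in idxs) / total
            for k in range(len(features[i]))
        ])
    return fused
```

The intuition is that context from neighbouring slices suppresses partial-volume artifacts that look lesion-like in a single slice.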
Collapse
|
248
|
Pielsticker L, Nicholls RL, DeBeer S, Greiner M. Convolutional neural network framework for the automated analysis of transition metal X-ray photoelectron spectra. Anal Chim Acta 2023; 1271:341433. [PMID: 37328241 DOI: 10.1016/j.aca.2023.341433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 05/15/2023] [Accepted: 05/26/2023] [Indexed: 06/18/2023]
Abstract
X-ray photoelectron spectroscopy is an indispensable technique for the quantitative determination of sample composition and electronic structure in diverse research fields. Quantitative analysis of the phases present in XP spectra is usually conducted manually by means of empirical peak fitting performed by trained spectroscopists. However, with recent advancements in the usability and reliability of XPS instruments, ever more (inexperienced) users are creating increasingly large data sets that are harder to analyze by hand. In order to aid users with the analysis of large XPS data sets, more automated, easy-to-use analysis techniques are needed. Here, we propose a supervised machine learning framework based on artificial convolutional neural networks. By training such networks on large numbers of artificially created XP spectra with known quantifications (i.e., for each spectrum, the concentration of each chemical species is known), we created universally applicable models for auto-quantification of transition-metal XPS data that are able to predict the sample composition from spectra within seconds. Upon evaluation against more traditional peak fitting methods, we showed that these neural networks achieve competitive quantification accuracy. The proposed framework is shown to be flexible enough to accommodate spectra containing multiple chemical elements and measured with different experimental parameters. The use of dropout variational inference for the determination of quantification uncertainty is illustrated.
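The training-data scheme described above, artificial spectra with known quantifications, can be sketched as random linear combinations of per-species reference spectra. This is an assumed data-generation scheme for illustration, not the authors' code; real XPS simulation would also model peak shapes, backgrounds, and instrument broadening.

```python
# Illustrative sketch: build a synthetic spectrum as a random mixture of
# reference spectra, so the ground-truth composition is known by construction.
import random

def synth_spectrum(references, rng=None, noise=0.01):
    """references: list of equal-length reference spectra (one per species).
    Returns (spectrum, concentrations), with concentrations summing to 1."""
    rng = rng or random.Random(0)
    raw = [rng.random() for _ in references]
    total = sum(raw)
    conc = [r / total for r in raw]
    spectrum = [
        sum(c * ref[i] for c, ref in zip(conc, references)) + rng.gauss(0.0, noise)
        for i in range(len(references[0]))
    ]
    return spectrum, conc
```

A network trained on many such (spectrum, concentrations) pairs learns the inverse mapping, which is what enables quantification "within seconds" at inference time.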
Collapse
Affiliation(s)
- Lukas Pielsticker
- Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, 45470, Muelheim an der Ruhr, Germany
| | - Rachel L Nicholls
- Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, 45470, Muelheim an der Ruhr, Germany
| | - Serena DeBeer
- Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, 45470, Muelheim an der Ruhr, Germany
| | - Mark Greiner
- Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, 45470, Muelheim an der Ruhr, Germany
| |
Collapse
|
249
|
Ogundokun RO, Li A, Babatunde RS, Umezuruike C, Sadiku PO, Abdulahi AT, Babatunde AN. Enhancing Skin Cancer Detection and Classification in Dermoscopic Images through Concatenated MobileNetV2 and Xception Models. Bioengineering (Basel) 2023; 10:979. [PMID: 37627864 PMCID: PMC10451641 DOI: 10.3390/bioengineering10080979] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 08/04/2023] [Accepted: 08/17/2023] [Indexed: 08/27/2023] Open
Abstract
One of the most promising research initiatives in the healthcare field focuses on the rising incidence of skin cancer worldwide and on improving early detection methods for the disease. The most significant factor in the fatalities caused by skin cancer is late identification of the disease. The likelihood of survival may be significantly improved by an early diagnosis followed by appropriate therapy. Extracting features from tumor photographs that can be used for prospective identification of skin cancer is not a simple process. Several deep learning models are widely used to extract efficient features for skin cancer diagnosis; nevertheless, the literature demonstrates that there is still room for improvement on various performance metrics. This study proposes a hybrid deep convolutional neural network architecture for identifying skin cancer that combines two backbone models, Xception and MobileNetV2. Data augmentation was introduced to balance the dataset, and transfer learning was utilized to address the scarcity of labeled datasets. We found that the proposed combination of Xception and MobileNetV2 attains the best performance on the evaluated dataset: specifically, it produced 97.56% accuracy, 97.00% area under the curve, 100% sensitivity, 93.33% precision, a 96.55% F1 score, and a 0.0370 false-positive rate. This research has implications for clinical practice and public health, offering a valuable tool for dermatologists and healthcare professionals in their fight against skin cancer.
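The hybrid architecture described above amounts to concatenating the feature vectors of two backbones before a shared classification head. The sketch below illustrates only that concatenation step; the extractor functions are trivial stand-ins, not the real MobileNetV2/Xception embeddings.

```python
# Hypothetical sketch: embeddings from two (stand-in) backbone extractors are
# concatenated into one feature vector for a downstream classifier.

def extract_mobilenet_like(image):
    # Stand-in for a MobileNetV2 embedding of the image.
    return [sum(image) / len(image), max(image)]

def extract_xception_like(image):
    # Stand-in for an Xception embedding of the image.
    return [min(image), sum(x * x for x in image) / len(image)]

def concatenated_features(image):
    """Concatenate both embeddings, as in the hybrid Xception+MobileNetV2 model."""
    return extract_mobilenet_like(image) + extract_xception_like(image)
```

In a real implementation, both backbones would be pretrained (transfer learning), their outputs concatenated, and a dense classification head trained on the augmented dermoscopic dataset.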
Collapse
Affiliation(s)
- Roseline Oluwaseun Ogundokun
- Department of Computer Science, Landmark University, Omu Aran 251103, Nigeria
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
| | - Aiman Li
- School of Marxism, Guangzhou University of Chinese Medicine, Guangzhou 510006, China
| | | | | | - Peter O. Sadiku
- Department of Computer Science, University of Ilorin, Ilorin 240003, Nigeria
| | | | | |
Collapse
|
250
|
Yang M, Han J, Park JI, Hwang JS, Han JM, Yoon J, Choi S, Hwang G, Hwang DDJ. Prediction of Visual Acuity in Pathologic Myopia with Myopic Choroidal Neovascularization Treated with Anti-Vascular Endothelial Growth Factor Using a Deep Neural Network Based on Optical Coherence Tomography Images. Biomedicines 2023; 11:2238. [PMID: 37626734 PMCID: PMC10452208 DOI: 10.3390/biomedicines11082238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Revised: 08/03/2023] [Accepted: 08/08/2023] [Indexed: 08/27/2023] Open
Abstract
Myopic choroidal neovascularization (mCNV) is a common cause of vision loss in patients with pathological myopia. However, predicting the visual prognosis of patients with mCNV remains challenging. This study aimed to develop an artificial intelligence (AI) model to predict visual acuity (VA) in patients with mCNV. This study included 279 patients with mCNV at baseline; patient data were collected, including optical coherence tomography (OCT) images, VA, and demographic information. Two models were developed: one comprising horizontal/vertical OCT images (H/V cuts) and the second comprising 25 volume scan images. The coefficient of determination (R2) and root mean square error (RMSE) were computed to evaluate the performance of the trained network. The models achieved high performance in predicting VA after 1 (R2 = 0.911, RMSE = 0.151), 2 (R2 = 0.894, RMSE = 0.254), and 3 (R2 = 0.891, RMSE = 0.227) years. Using multiple-volume scanning, OCT images enhanced the performance of the models relative to using only H/V cuts. This study proposes AI models to predict VA in patients with mCNV. The models achieved high performance by incorporating the baseline VA, OCT images, and post-injection data. This model could assist in predicting the visual prognosis and evaluating treatment outcomes in patients with mCNV undergoing intravitreal anti-vascular endothelial growth factor therapy.
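The evaluation metrics used above, the coefficient of determination (R2) and root mean square error (RMSE), can be sketched directly. This is an illustrative sketch, not the study's code.

```python
# Illustrative sketch: R^2 and RMSE for predicted vs. observed visual acuity.

def r2_rmse(observed, predicted):
    """Returns (R^2, RMSE) for paired observed/predicted values."""
    n = len(observed)
    mean = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5
```

R2 near 1 with a small RMSE, as reported for the 1-, 2-, and 3-year predictions, indicates that the predicted VA closely tracks the observed values.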
Collapse
Affiliation(s)
- Migyeong Yang
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul 03603, Republic of Korea; (M.Y.); (J.H.); (J.Y.); (S.C.)
| | - Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul 03603, Republic of Korea; (M.Y.); (J.H.); (J.Y.); (S.C.)
- Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul 03603, Republic of Korea
| | - Ji In Park
- Department of Medicine, Kangwon National University Hospital, Kangwon National University School of Medicine, Chuncheon 24341, Gangwon-do, Republic of Korea;
| | | | - Jeong Mo Han
- Seoul Bombit Eye Clinic, Sejong 30127, Republic of Korea;
| | - Jeewoo Yoon
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul 03603, Republic of Korea; (M.Y.); (J.H.); (J.Y.); (S.C.)
- RAONDATA, Seoul 04615, Republic of Korea
| | - Seong Choi
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul 03603, Republic of Korea; (M.Y.); (J.H.); (J.Y.); (S.C.)
- RAONDATA, Seoul 04615, Republic of Korea
| | - Gyudeok Hwang
- Department of Ophthalmology, Hangil Eye Hospital, Incheon 21388, Republic of Korea;
| | - Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, Incheon 21388, Republic of Korea;
- Department of Ophthalmology, Catholic Kwandong University College of Medicine, Incheon 22711, Republic of Korea
| |
Collapse
|