101
Chekani F, Zhu Z, Khandker RK, Ai J, Meng W, Holler E, Dexter P, Boustani M, Ben Miled Z. Modeling acute care utilization: practical implications for insomnia patients. Sci Rep 2023; 13:2185. PMID: 36750631; PMCID: PMC9905481; DOI: 10.1038/s41598-023-29366-6.
Abstract
Machine learning models can help improve health care services. However, they need to be practical to gain wide adoption. In this study, we investigate the practical utility of different data modalities and cohort segmentation strategies when designing models for emergency department (ED) and inpatient hospital (IH) visits. The data modalities include socio-demographics, diagnoses, and medications. Segmentation compares a cohort of insomnia patients to a cohort of general non-insomnia patients under varying age and disease severity criteria. Transfer testing between the two cohorts is introduced to demonstrate that an insomnia-specific model is not necessary when predicting future ED visits, but may have merit when predicting IH visits, especially for patients with an insomnia diagnosis. The results also indicate that using both diagnoses and medications as data sources does not generally improve model performance and may increase its overhead. Based on these findings, the proposed evaluation methodologies are recommended for ascertaining the utility of disease-specific models, in addition to the traditional intra-cohort testing.
Affiliation(s)
- Zitong Zhu
- Computer Science, IUPUI, Indianapolis, IN, 46202, USA
- Jizhou Ai
- Merck & Co., Inc., Rahway, NJ, 07065, USA
- Emma Holler
- School of Public Health, Indiana University, Bloomington, IN, 47405, USA
- Paul Dexter
- School of Medicine, Indiana University, Indianapolis, IN, 46202, USA
- Regenstrief Institute, Indianapolis, IN, 46202, USA
- Malaz Boustani
- School of Medicine, Indiana University, Indianapolis, IN, 46202, USA
- Regenstrief Institute, Indianapolis, IN, 46202, USA
- Zina Ben Miled
- Regenstrief Institute, Indianapolis, IN, 46202, USA
- Electrical and Computer Engineering, IUPUI, Indianapolis, IN, 46202, USA
102
De Santi LA, Pasini E, Santarelli MF, Genovesi D, Positano V. An Explainable Convolutional Neural Network for the Early Diagnosis of Alzheimer's Disease from 18F-FDG PET. J Digit Imaging 2023; 36:189-203. PMID: 36344633; PMCID: PMC9984631; DOI: 10.1007/s10278-022-00719-3.
Abstract
Convolutional Neural Networks (CNNs) that support the diagnosis of Alzheimer's Disease using 18F-FDG PET images are obtaining promising results; however, one of the main challenges in this domain is that these models work as black-box systems. We developed a CNN that performs multiclass classification of volumetric 18F-FDG PET images, and we experimented with two post hoc explanation techniques developed in the field of Explainable Artificial Intelligence: Saliency Maps (SM) and Layerwise Relevance Propagation (LRP). Finally, we quantitatively analyzed the returned explanations and inspected their relationship with the PET signal. We collected 2552 scans from the Alzheimer's Disease Neuroimaging Initiative labeled as Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD), and we developed and tested a 3D CNN that classifies the 3D PET scans into the final clinical diagnosis. To the best of our knowledge, the model achieves test-set performance comparable with the relevant literature, with average Areas Under the Curve (AUC) for prediction of CN, MCI, and AD of 0.81, 0.63, and 0.77, respectively. We registered the heatmaps with the Talairach Atlas to perform a regional quantitative analysis of the relationship between heatmaps and PET signals. In this quantitative analysis of the post hoc explanation techniques, we observed that LRP maps were more effective at mapping the importance metrics onto the anatomic atlas. No clear relationship was found between the heatmaps and the PET signal.
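For readers unfamiliar with LRP, the epsilon rule it builds on can be sketched on a toy network. The following is a minimal numpy illustration using an invented bias-free two-layer ReLU network with arbitrary weights, not the paper's 3D CNN: each layer's relevance is redistributed to the layer below in proportion to each unit's contribution to the pre-activation.

```python
import numpy as np

def lrp_epsilon(x, W1, W2, eps=1e-9):
    """Epsilon-rule Layer-wise Relevance Propagation through a toy,
    bias-free two-layer ReLU network: h = relu(W1 @ x), y = W2 @ h.
    Returns per-input relevance scores and the output scores."""
    z1 = W1 @ x                        # hidden pre-activations, shape (H,)
    h = np.maximum(z1, 0.0)            # hidden activations
    y = W2 @ h                         # output scores, shape (C,)

    def stabilize(d):
        # Push denominators away from zero while preserving their sign.
        return d + eps * np.where(d >= 0, 1.0, -1.0)

    # Output -> hidden: each hidden unit receives relevance in proportion
    # to its contribution z[c, k] = W2[c, k] * h[k] to each output score.
    z = W2 * h[None, :]
    R_h = (z / stabilize(z.sum(axis=1, keepdims=True)) * y[:, None]).sum(axis=0)

    # Hidden -> input: the same redistribution, one layer down.
    z = W1 * x[None, :]
    R_x = (z / stabilize(z.sum(axis=1, keepdims=True)) * R_h[:, None]).sum(axis=0)
    return R_x, y
```

In this bias-free setting the rule is (approximately) conservative: the input relevances sum to the total output score, which is the property that makes LRP heatmaps interpretable as a decomposition of the model's decision.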
Affiliation(s)
- Elena Pasini
- CNR Institute of Clinical Physiology, Pisa, Italy
- Dario Genovesi
- Nuclear Medicine Unit - Fondazione G. Monasterio CNR - Regione Toscana, Pisa, Italy
- Vincenzo Positano
- Bioengineering Unit - Fondazione G. Monasterio CNR - Regione Toscana, Via Giuseppe Moruzzi, 1, 56124 Pisa, Italy
103
Du J, Guan K, Liu P, Li Y, Wang T. Boundary-Sensitive Loss Function With Location Constraint for Hard Region Segmentation. IEEE J Biomed Health Inform 2023; 27:992-1003. PMID: 36378793; DOI: 10.1109/jbhi.2022.3222390.
Abstract
In computer-aided diagnosis and treatment planning, accurate segmentation of medical images plays an essential role, especially for hard regions such as boundaries, small objects, and background interference. However, existing segmentation loss functions, including distribution-, region-, and boundary-based losses, cannot achieve satisfactory performance on these hard regions. In this paper, a boundary-sensitive loss function with location constraint is proposed for hard region segmentation in medical images, which provides three advantages: i) our Boundary-Sensitive loss (BS-loss) automatically pays more attention to hard-to-segment boundaries (e.g., thin structures and blurred boundaries), thus obtaining finer object boundaries; ii) BS-loss can also adjust its attention to small objects during training to segment them more accurately; and iii) our location constraint alleviates the negative impact of background interference through distribution matching of pixels between the prediction and Ground Truth (GT) along each axis. With the proposed BS-loss and location constraint, the hard regions in both foreground and background are considered. Experimental results on three public datasets demonstrate the superiority of our method. Specifically, compared to the second-best method tested in this study, our method improves performance on hard regions in terms of the Dice similarity coefficient (DSC) and 95% Hausdorff distance (95%HD) by up to 4.17% and 73%, respectively. In addition, it also achieves the best overall segmentation performance. Hence, our method can accurately segment these hard regions and improve overall segmentation performance in medical images.
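For reference, the headline metric in this entry, the Dice similarity coefficient, measures overlap between a predicted mask P and a ground-truth mask G as 2|P ∩ G| / (|P| + |G|). A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, smooth=1e-6):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |P intersect G| / (|P| + |G|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # `smooth` guards against 0/0 when both masks are empty.
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
```

Boundary-sensitive losses of the kind described above typically optimize a differentiable (soft) variant of this overlap while up-weighting pixels near the mask boundary.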
104
LCSCNet: A multi-level approach for lung cancer stage classification using 3D dense convolutional neural networks with concurrent squeeze-and-excitation module. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104391.
105
Schraut JX, Liu L, Gong J, Yin Y. A multi-output network with U-net enhanced class activation map and robust classification performance for medical imaging analysis. Discover Artificial Intelligence 2023. DOI: 10.1007/s44163-022-00045-1.
Abstract
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. To gain local insight into cancerous regions, a separate task such as image segmentation needs to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional CNN module for an auxiliary classification output. Class Activation Maps (CAMs) provide insight into the feature maps of a convolutional neural network that lead to its classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. The proposed model therefore combines an image segmentation model and a classifier to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in an AI-led diagnosis system. The proposed U-Net model achieves 97.72% accuracy and a Dice coefficient of 0.9691 on test data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
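The CAM construction this entry builds on weights each final-layer feature map by the classifier weight tying it to the target class, and the segmentation mask then crops the heatmap to the lungs. A hedged numpy sketch of that mechanic (shapes, names, and the masking step are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx, lung_mask=None):
    """Class Activation Map for a global-average-pooling classifier.
    feature_maps: (K, H, W) activations of the last conv layer.
    fc_weights:   (num_classes, K) weights of the final linear classifier.
    lung_mask:    optional (H, W) binary mask used to crop the heatmap."""
    # Weighted sum of feature maps for the target class -> (H, W) heatmap.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)          # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()           # normalize to [0, 1] for overlay
    if lung_mask is not None:
        cam = cam * lung_mask           # U-Net-style crop to the lung region
    return cam
```

In practice the heatmap is upsampled to the input resolution before being overlaid on the X-ray; the masking step is what restricts the explanation to anatomically plausible regions.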
106
Thanathornwong B, Suebnukarn S, Ouivirach K. Clinical Decision Support System for Geriatric Dental Treatment Using a Bayesian Network and a Convolutional Neural Network. Healthc Inform Res 2023; 29:23-30. PMID: 36792098; PMCID: PMC9932303; DOI: 10.4258/hir.2023.29.1.23.
Abstract
OBJECTIVES The aim of this study was to evaluate the performance of a clinical decision support system (CDSS) for therapeutic plans in geriatric dentistry. The information that needs to be considered in a therapeutic plan includes not only the patient's oral health status obtained from an oral examination, but also other related factors such as underlying diseases, socioeconomic characteristics, and functional dependency. METHODS A Bayesian network (BN) was used as a framework to construct a model of contributing factors and their causal relationships based on clinical knowledge and data. The Faster R-CNN (region-based convolutional neural network) algorithm was used to detect oral health status, which formed part of the BN structure. The study was conducted using retrospective data from 400 patients receiving geriatric dental care at a university hospital between January 2020 and June 2021. RESULTS The model showed an F1-score of 89.31%, precision of 86.69%, and recall of 82.14% for the detection of periodontally compromised teeth. A receiver operating characteristic curve analysis showed that the BN model was highly accurate in recommending therapeutic plans (area under the curve = 0.902). The model's performance was compared to that of experts in geriatric dentistry, and the experts and the system strongly agreed on the recommended therapeutic plans (kappa value = 0.905). CONCLUSIONS This research was the first phase of the development of a CDSS to recommend geriatric dental treatment. The proposed system, when integrated into the clinical workflow, is expected to provide general practitioners with expert-level decision support in geriatric dental care.
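Inference in such a Bayesian network reduces to summing out unobserved parents of the recommendation node. A toy sketch of that mechanic, with an invented structure and invented probabilities (a hidden "functional dependency" parent and an observed periodontal finding), purely to illustrate enumeration; it is not the study's actual network:

```python
# Toy Bayesian network: one unobserved parent (functional dependency) and one
# observed finding (periodontally compromised teeth) jointly determine the
# probability of recommending a conservative treatment plan.
# All structure and numbers below are invented for illustration.

P_DEPENDENCY = {True: 0.3, False: 0.7}  # prior on functional dependency

# CPT: P(conservative plan | dependency, perio_compromised)
P_PLAN = {
    (True, True): 0.9,
    (True, False): 0.6,
    (False, True): 0.4,
    (False, False): 0.1,
}

def p_plan_given_perio(perio_compromised: bool) -> float:
    """P(conservative plan | perio finding), enumerating the hidden parent."""
    return sum(
        P_DEPENDENCY[dep] * P_PLAN[(dep, perio_compromised)]
        for dep in (True, False)
    )
```

In the study's pipeline, the observed finding would come from the Faster R-CNN detector rather than being entered by hand; the BN then combines it with patient-level factors exactly as enumerated here, only over more variables.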
107
Kimmerle J, Timm J, Festl-Wietek T, Cress U, Herrmann-Werner A. Medical Students' Attitudes Toward AI in Medicine and their Expectations for Medical Education. Journal of Medical Education and Curricular Development 2023; 10:23821205231219346. PMID: 38075443; PMCID: PMC10704950; DOI: 10.1177/23821205231219346.
Abstract
OBJECTIVES Artificial intelligence (AI) is used in a variety of contexts in medicine. This involves the use of algorithms and software that analyze digital information to make diagnoses and suggest adapted therapies. It is unclear, however, what medical students know about AI in medicine, how they evaluate its application, and what they expect from their medical training accordingly. In the study presented here, we aimed to provide answers to these questions. METHODS In this survey study, we asked medical students about their assessment of AI in medicine and recorded their ideas and suggestions for considering this topic in medical education. Fifty-eight medical students completed the survey. RESULTS Almost all participants were aware of the use of AI in medicine and had an adequate understanding of it. They perceived AI in medicine to be reliable, trustworthy, and technically competent, but did not have much faith in it. They considered AI in medicine to be rather intelligent but not anthropomorphic. Participants were interested in the opportunities of AI in the medical context and wanted to learn more about it. They indicated that basic AI knowledge should be taught in medical studies, in particular knowledge about modes of operation, ethics, areas of application, reliability, and possible risks. CONCLUSIONS We discuss the implications of these findings for curricular development in medical education. Medical students need to be equipped with the knowledge and skills to use AI effectively and ethically in their future practice. This includes understanding the limitations and potential biases of AI algorithms, as well as the sensible use of human oversight and continuous monitoring to catch errors and ensure that final decisions are made by human clinicians.
Affiliation(s)
- Joachim Kimmerle
- Knowledge Construction Lab, Leibniz-Institut fuer Wissensmedien, Tuebingen, Germany
- Department of Psychology, University of Tuebingen, Tuebingen, Germany
- Jasmin Timm
- Knowledge Construction Lab, Leibniz-Institut fuer Wissensmedien, Tuebingen, Germany
- Teresa Festl-Wietek
- Tuebingen Institute for Medical Education, University of Tuebingen, Tuebingen, Germany
- Ulrike Cress
- Knowledge Construction Lab, Leibniz-Institut fuer Wissensmedien, Tuebingen, Germany
- Department of Psychology, University of Tuebingen, Tuebingen, Germany
- Anne Herrmann-Werner
- Tuebingen Institute for Medical Education, University of Tuebingen, Tuebingen, Germany
108
Scherer L, Kuss M, Nahm W. Review of Artificial Intelligence-Based Signal Processing in Dialysis: Challenges for Machine-Embedded and Complementary Applications. Advances in Kidney Disease and Health 2023; 30:40-46. PMID: 36723281; DOI: 10.1053/j.akdh.2022.11.002.
Abstract
Artificial intelligence technology is trending in nearly every medical area. It offers the possibility of improving analytics, therapy outcomes, and the user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is led more by start-ups than by consolidated players, and innovation in dialysis seems comparably stagnant. Factors such as technical requirements or regulatory processes are important and necessary but can slow down the implementation of artificial intelligence due to missing data infrastructure and undefined approval processes. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data. It rarely uses signal data from treatment devices in artificial intelligence models. This article therefore discusses requirements for signal processing through artificial intelligence in health care and compares these with the status quo in dialysis therapy. It offers solutions for the given barriers to speed up innovation with sensor data, opening access to existing and untapped sources, and shows the unique advantage of signal processing in dialysis compared to other health care domains. This research shows that even though combining different data sources is vital for improving patients' therapy, adding signal-based treatment data from dialysis devices can improve the understanding of treatment dynamics and help individualize therapy.
Affiliation(s)
- Lena Scherer
- Karlsruhe Institute of Technology, Karlsruhe, Germany
- Werner Nahm
- Karlsruhe Institute of Technology, Karlsruhe, Germany
109
He X, Liu X, Zuo F, Shi H, Jing J. Artificial intelligence-based multi-omics analysis fuels cancer precision medicine. Semin Cancer Biol 2023; 88:187-200. PMID: 36596352; DOI: 10.1016/j.semcancer.2022.12.009.
Abstract
With biotechnological advancements, innovative omics technologies are constantly emerging that enable researchers to access multi-layer information from the genome, epigenome, transcriptome, proteome, metabolome, and more. A wealth of omics technologies, including bulk and single-cell approaches, has made it possible to characterize the different molecular layers at unprecedented scale and resolution, providing a holistic view of tumor behavior. Multi-omics analysis allows systematic interrogation of molecular information at each biological layer, while posing challenging questions about how to extract valuable insights from the exponentially increasing amount of multi-omics data. Therefore, efficient algorithms are needed to reduce the dimensionality of the data while simultaneously dissecting the complex biological processes of cancer. Artificial intelligence has demonstrated the ability to analyze complementary multi-modal data streams within the oncology realm. The coincident development of multi-omics technologies and artificial intelligence algorithms has fuelled the development of cancer precision medicine. Here, we present state-of-the-art omics technologies and outline a roadmap for multi-omics integration analysis using artificial intelligence. The advances made using artificial intelligence-based multi-omics approaches are described, especially concerning early cancer screening, diagnosis, response assessment, and prognosis prediction. Finally, we discuss the challenges faced in multi-omics analysis, along with tentative future trends in this field. With the increasing application of artificial intelligence in multi-omics analysis, we anticipate a paradigm shift in precision medicine driven by artificial intelligence-based multi-omics technologies.
Affiliation(s)
- Xiujing He
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan, PR China
- Xiaowei Liu
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan, PR China
- Fengli Zuo
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan, PR China
- Hubing Shi
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan, PR China
- Jing Jing
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan, PR China
110
A systematic review on the potential use of machine learning to classify major depressive disorder from healthy controls using resting state fMRI measures. Neurosci Biobehav Rev 2023; 144:104972. PMID: 36436736; DOI: 10.1016/j.neubiorev.2022.104972.
Abstract
BACKGROUND Major Depressive Disorder (MDD) is a psychiatric disorder characterized by functional brain deficits, as documented by resting-state functional magnetic resonance imaging (rs-fMRI) studies. AIMS In recent years, some studies have used machine learning (ML) approaches based on rs-fMRI features to classify MDD patients and healthy controls (HC). In this context, this review aims to provide a comprehensive overview of the results of these studies. DESIGN The literature search was performed in three online databases, examining English-language articles published before August 5, 2022, that performed two-class ML classification using rs-fMRI features. The search resulted in 20 eligible studies. RESULTS The reviewed studies showed good performance metrics, with better performance achieved when the dataset was restricted to a more homogeneous group in terms of disease severity. Regions within the default mode network, salience network, and central executive network were reported as the most important features in the classification algorithms. LIMITATIONS The small sample sizes, together with the methodological and clinical heterogeneity, limited the generalizability of the findings. CONCLUSIONS In conclusion, ML applied to rs-fMRI features can be a valid approach for classifying MDD and HC subjects and for discovering features that can be used for further investigation of the pathophysiology of the disease.
111
Wang L, Ding N, Zuo P, Wang X, Rai BK. Application and Challenges of Artificial Intelligence in Medical Imaging. 2022 International Conference on Knowledge Engineering and Communication Systems (ICKECS) 2022:1-6. DOI: 10.1109/ickecs56523.2022.10059898.
Affiliation(s)
- Lingyu Wang
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
- Ning Ding
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
- Pengfei Zuo
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
- Xuenan Wang
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
- B Karunakara Rai
- Nitte Meenakshi Institute of Technology, Department of Electronics and Communication Engineering, Bengaluru, India
112
Liu YY, Huang ZH, Huang KW. Deep Learning Model for Computer-Aided Diagnosis of Urolithiasis Detection from Kidney-Ureter-Bladder Images. Bioengineering (Basel) 2022; 9:811. PMID: 36551017; PMCID: PMC9774756; DOI: 10.3390/bioengineering9120811.
Abstract
Kidney-ureter-bladder (KUB) imaging is a low-cost, low-radiation, and convenient radiological examination. Although emergency room clinicians can easily order KUB imaging as a first-line examination for patients with suspected urolithiasis, interpreting KUB images correctly is difficult for inexperienced clinicians. Obtaining a formal radiology report immediately after a KUB imaging examination can also be challenging. Recently, artificial-intelligence-based computer-aided diagnosis (CAD) systems have been developed to help non-expert clinicians make correct diagnoses for further treatment more effectively. Therefore, in this study, we propose a CAD system for KUB imaging based on a deep learning model designed to help first-line emergency room clinicians diagnose urolithiasis accurately. A total of 355 KUB images were retrospectively collected from 104 patients who were diagnosed with urolithiasis at Kaohsiung Chang Gung Memorial Hospital. We then trained a deep learning model with a ResNet architecture on this dataset of pre-processed images to classify KUB images by the presence or absence of kidney stones. Finally, we tuned the parameters and tested the model experimentally. The results show that the accuracy, sensitivity, specificity, and F1-measure of the model were 0.977, 0.953, 1, and 0.976 on the validation set and 0.982, 0.964, 1, and 0.982 on the testing set, respectively. Moreover, the proposed model performed well compared to existing CNN-based methods and successfully detected urolithiasis in KUB images. We expect the proposed approach to help emergency room clinicians make accurate diagnoses and reduce unnecessary radiation exposure from computed tomography (CT) scans, along with the associated medical costs.
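The four metrics reported in this entry all derive from the binary confusion matrix. A small sketch of how they are computed (the labels below are illustrative, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, and F1 for binary labels,
    with the positive class encoded as 1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, f1
```

A specificity of 1, as reported here, means no stone-free image was flagged as containing a stone (zero false positives on that split).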
Affiliation(s)
- Yi-Yang Liu
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 80778, Taiwan
- Department of Urology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City 83301, Taiwan
- Zih-Hao Huang
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 80778, Taiwan
- Ko-Wei Huang
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 80778, Taiwan
113
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. PMID: 36553119; PMCID: PMC9777253; DOI: 10.3390/diagnostics12123111.
Abstract
Artificial intelligence (AI), a rapidly advancing technology disrupting a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning honed with extensive cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has topped the cumulative cancer risk ranking for women across the six continents, existing in variegated forms and offering a complicated context for medical decisions. Given the ever-increasing demand for quality healthcare, contemporary AI is envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into a meaningful clinical endpoint. Here, the authors review works from the past decades focusing on AI in breast imaging and systematize the included works into one usable document, termed an umbrella review, with the aim of providing a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. The study synthesizes, collates, and correlates the included works, identifying the patterns, trends, quality, and types of works captured by the structured search strategy. It is intended to serve as a "one-stop center" synthesis and provide a holistic bird's-eye view to readers, from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
114
Zan P, Zhong H, Zhao Y, Xu H, Hong R, Ding Q, Yue J. Research on improved intestinal image classification for LARS based on ResNet. Rev Sci Instrum 2022; 93:124101. PMID: 36586901; DOI: 10.1063/5.0100192.
Abstract
Low anterior resection is currently an effective treatment for rectal cancer, but it can easily cause low anterior resection syndrome (LARS) after surgery, so a comprehensive diagnosis of defecation and pelvic floor function must be carried out. There are few studies on diagnostic classification in the field of intestinal diseases. In response to these outstanding problems, this research focuses on the design of an intestinal function diagnosis system and on image processing and classification algorithms for the intestinal wall, in order to verify an efficient fusion method that can be used to diagnose intestinal diseases in clinical medicine. The diagnostic system designed in this paper makes up for the singleness of clinical monitoring methods. At the same time, the Res-SVDNet neural network model is used to address the problems of small intestinal image samples and network overfitting, and to achieve efficient fusion diagnosis of intestinal diseases in patients. Different models were compared experimentally on the constructed datasets to verify the applicability of the Res-SVDNet model for intestinal image classification. The accuracy of the model was 99.54%, several percentage points higher than that of the other algorithm models.
Affiliation(s)
- Peng Zan
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Hua Zhong
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Yutong Zhao
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Huiyan Xu
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Rui Hong
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Qiao Ding
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai 200444, China
- Jingwei Yue
- Beijing Institute of Radiation Medicine, Beijing 100850, China
115
|
Attique Khan M, Sharif M, Akram T, Kadry S, Hsu C. A two‐stream deep neural network‐based intelligent system for complex skin cancer types classification. INT J INTELL SYST 2022; 37:10621-10649. [DOI: 10.1002/int.22691] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Accepted: 09/01/2021] [Indexed: 01/25/2023]
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science COMSATS University Islamabad Wah Campus Wah Cantt Pakistan
| | - Muhammad Sharif
- Department of Computer Science COMSATS University Islamabad Wah Campus Wah Cantt Pakistan
| | - Tallha Akram
- Department of Electrical and Computer Engineering COMSATS University Islamabad Wah Campus Wah Cantt Pakistan
| | - Seifedine Kadry
- Faculty of Applied Computing and Technology Noroff University College Kristiansand Norway
| | - Ching‐Hsien Hsu
- Guangdong‐Hong Kong‐Macao Joint Laboratory for Intelligent Micro‐Nano Optoelectronic Technology, School of Mathematics and Big Data Foshan University Foshan China
- Department of Computer Science and Information Engineering Asia University Taichung Taiwan
- Department of Medical Research China Medical University Hospital China Medical University Taichung Taiwan
| |
Collapse
|
116
|
Khan MF, Kumar RNS, Patil T, Reddy A, Mane V, Santhoshkumar S. Neural Network Optimized Medical Image Classification with a Deep Comparison. 2022 INTERNATIONAL CONFERENCE ON AUGMENTED INTELLIGENCE AND SUSTAINABLE SYSTEMS (ICAISS) 2022:11-15. [DOI: 10.1109/icaiss55157.2022.10011109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
Affiliation(s)
- Mohd Faizaan Khan
- SRM Institute of Science and Technology, Department of Information Technology, Chennai, India
| | | | - Tanishka Patil
- K. K. Wagh Institute, Department of CSE, Nashik, Maharashtra, India
| | - Apoorva Reddy
- Vellore Institute of Technology, Department of CSE, Vellore, India
| | - Vinayak Mane
- College of Engineering Pune, Department of CSE, Pune, India
| | | |
Collapse
|
117
|
Jalalifar SA, Soliman H, Sahgal A, Sadeghi-Naini A. A Self-Attention-Guided 3D Deep Residual Network With Big Transfer to Predict Local Failure in Brain Metastasis After Radiotherapy Using Multi-Channel MRI. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 11:13-22. [PMID: 36478770 PMCID: PMC9721353 DOI: 10.1109/jtehm.2022.3219625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 10/15/2022] [Accepted: 11/02/2022] [Indexed: 11/06/2022]
Abstract
A noticeable proportion of larger brain metastases (BMs) are not locally controlled after stereotactic radiotherapy, and it may take months before local progression is apparent on standard follow-up imaging. This work proposes and investigates new explainable deep-learning models to predict the radiotherapy outcome for BM. A novel self-attention-guided 3D residual network is introduced for predicting the outcome of local failure (LF) after radiotherapy using the baseline treatment-planning MRI. The 3D self-attention modules facilitate capturing long-range intra/inter slice dependencies which are often overlooked by convolution layers. The proposed model was compared to a vanilla 3D residual network and 3D residual network with CBAM attention in terms of performance in outcome prediction. A training recipe was adapted for the outcome prediction models during pretraining and training the down-stream task based on the recently proposed big transfer principles. A novel 3D visualization module was coupled with the model to demonstrate the impact of various intra/peri-lesion regions on volumetric multi-channel MRI upon the network's prediction. The proposed self-attention-guided 3D residual network outperforms the vanilla residual network and the residual network with CBAM attention in accuracy, F1-score, and AUC. The visualization results show the importance of peri-lesional characteristics on treatment-planning MRI in predicting local outcome after radiotherapy. This study demonstrates the potential of self-attention-guided deep-learning features derived from volumetric MRI in radiotherapy outcome prediction for BM. The insights obtained via the developed visualization module for individual lesions can possibly be applied during radiotherapy planning to decrease the chance of LF.
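The paper's 3D self-attention modules are not specified here; as a generic sketch of the underlying scaled dot-product self-attention over a sequence of per-slice feature vectors (a simplified 2D stand-in with made-up dimensions, not the authors' architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every position attends to all others,
    which is how long-range inter-slice dependencies can be captured."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(1)
x = rng.standard_normal((10, 4))                      # 10 positions (e.g. slices), 4 features
Wq, Wk, Wv = (rng.standard_normal((4, 4)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)                   # same shape as the input sequence
```

Unlike a convolution, the attention matrix couples every slice with every other slice in a single step, which is the property the abstract credits for capturing intra/inter-slice dependencies.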
Collapse
Affiliation(s)
- Seyed Ali Jalalifar
- Department of Electrical Engineering and Computer ScienceLassonde School of EngineeringYork University Toronto ON M3J 1P3 Canada
| | - Hany Soliman
- Physical Sciences PlatformSunnybrook Research Institute, Sunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
- Department of Radiation OncologyOdette Cancer CentreSunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
- Department of Radiation OncologyUniversity of Toronto Toronto ON M5T 1P5 Canada
| | - Arjun Sahgal
- Physical Sciences PlatformSunnybrook Research Institute, Sunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
- Department of Radiation OncologyOdette Cancer CentreSunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
- Department of Radiation OncologyUniversity of Toronto Toronto ON M5T 1P5 Canada
| | - Ali Sadeghi-Naini
- Department of Electrical Engineering and Computer ScienceLassonde School of EngineeringYork University Toronto ON M3J 1P3 Canada
- Physical Sciences PlatformSunnybrook Research Institute, Sunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
- Department of Radiation OncologyOdette Cancer CentreSunnybrook Health Sciences Centre Toronto ON M4N 3M5 Canada
| |
Collapse
|
118
|
Arooj S, Rehman SU, Imran A, Almuhaimeed A, Alzahrani AK, Alzahrani A. A Deep Convolutional Neural Network for the Early Detection of Heart Disease. Biomedicines 2022; 10:2796. [PMID: 36359317 PMCID: PMC9687844 DOI: 10.3390/biomedicines10112796] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 10/26/2022] [Accepted: 10/29/2022] [Indexed: 08/08/2023] Open
Abstract
Heart disease is one of the leading causes of human death: according to the WHO, 17.9 million people die from it each year. With the various technologies and techniques developed for heart-disease detection, the use of image classification can further improve the results. Image classification is a significant concern in modern times; it is one of the most basic tasks in pattern recognition and computer vision, and refers to assigning one or more labels to images. Pattern recognition from images has become easier with machine learning, and deep learning has made it more precise than traditional image classification methods. This study applies a deep-learning, image-classification approach to heart-disease detection. The deep convolutional neural network (DCNN) is currently the most popular classification technique for image recognition. The proposed model is evaluated on the public UCI heart-disease dataset comprising 1050 patients and 14 attributes. By gathering a set of directly obtainable features from the heart-disease dataset, we use this feature vector as input to a DCNN to discriminate whether an instance belongs to the healthy or the cardiac-disease class. To assess the performance of the proposed method, different performance metrics, namely accuracy, precision, recall, and the F1 measure, were employed, and our model achieved a validation accuracy of 91.7%. The experimental results indicate the effectiveness of the proposed approach in a real-world environment.
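The four reported metrics follow the standard confusion-matrix definitions; a minimal sketch (the label vectors are invented for illustration):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = disease)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# tp=2, tn=1, fp=1, fn=1 -> acc = 3/5, prec = rec = f1 = 2/3
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```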
Collapse
Affiliation(s)
- Sadia Arooj
- University Institute of Information Technology, PMAS-Arid Agriculture University, Rawalpindi 46000, Pakistan
| | - Saif ur Rehman
- University Institute of Information Technology, PMAS-Arid Agriculture University, Rawalpindi 46000, Pakistan
| | - Azhar Imran
- Department of Creative Technologies, Faculty of Computing & Artificial Intelligence, Air University, Islamabad 42000, Pakistan
| | - Abdullah Almuhaimeed
- The National Centre for Genomics Technologies and Bioinformatics, King Abdulaziz City for Science and Technology, Riyadh 11442, Saudi Arabia
| | - A. Khuzaim Alzahrani
- Faculty of Applied Medical Sciences, Northern Border University, Arar 91431, Saudi Arabia
| | - Abdulkareem Alzahrani
- Faculty of Computer Science and Information Technology, Al Baha University, Al Baha 65779, Saudi Arabia
| |
Collapse
|
119
|
Bucharskaya AB, Yanina IY, Atsigeida SV, Genin VD, Lazareva EN, Navolokin NA, Dyachenko PA, Tuchina DK, Tuchina ES, Genina EA, Kistenev YV, Tuchin VV. Optical clearing and testing of lung tissue using inhalation aerosols: prospects for monitoring the action of viral infections. Biophys Rev 2022; 14:1005-1022. [PMID: 36042751 PMCID: PMC9415257 DOI: 10.1007/s12551-022-00991-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 08/03/2022] [Indexed: 02/06/2023] Open
Abstract
Optical clearing of the lung tissue aims to make it more transparent to light by minimizing light scattering, thus allowing reconstruction of the three-dimensional structure of the tissue with a much better resolution. This is of great importance for monitoring the impact of viral infection on the alveolar structure of the tissue and on oxygen transport. Optical clearing agents (OCAs) can not only reduce light scattering by tissue components but may also influence the molecular transport function of the alveolar membrane. Air-filled lungs present significant challenges for optical imaging, including optical coherence tomography (OCT), confocal and two-photon microscopy, and Raman spectroscopy, because of the large refractive-index mismatch between alveolar walls and the enclosed air-filled regions. During OCT imaging, light is strongly backscattered at each air–tissue interface, such that image reconstruction is typically limited to a single alveolus. Filling these cavities with an OCA, among which water (physiological solution) can also be counted since its refractive index is much higher than that of air, leads to much better tissue optical transmittance. This review presents general principles and advances in the field of tissue optical clearing (TOC) technology, OCA delivery mechanisms in lung tissue, studies of the impact of microbial and viral infections on tissue response, and antimicrobial and antiviral photodynamic therapies using methylene blue (MB) and indocyanine green (ICG) dyes as photosensitizers.
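The effect of the refractive-index mismatch on per-interface backscatter can be quantified with the normal-incidence Fresnel reflectance; a rough numeric sketch (the tissue index 1.38 is an illustrative value, not a figure from the review):

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at an interface between media n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_water, n_tissue = 1.0, 1.33, 1.38   # n_tissue is an assumed illustrative value

r_air = fresnel_reflectance(n_air, n_tissue)      # air-filled alveolus: strong backscatter
r_water = fresnel_reflectance(n_water, n_tissue)  # water-filled alveolus: far weaker backscatter
```

Replacing air with water shrinks the index step at each alveolar wall, which is exactly why filling the cavities with an OCA (or physiological solution) improves transmittance.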
Collapse
Affiliation(s)
- Alla B. Bucharskaya
- Centre of Collective Use, Saratov State Medical University n.a. V.I. Razumovsky, 112 B. Kazach’ya, Saratov, 410012 Russia
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Irina Yu. Yanina
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Sofia V. Atsigeida
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Vadim D. Genin
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Ekaterina N. Lazareva
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Nikita A. Navolokin
- Centre of Collective Use, Saratov State Medical University n.a. V.I. Razumovsky, 112 B. Kazach’ya, Saratov, 410012 Russia
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
| | - Polina A. Dyachenko
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Daria K. Tuchina
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Elena S. Tuchina
- Department of Biology, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
| | - Elina A. Genina
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Yury V. Kistenev
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
| | - Valery V. Tuchin
- Science Medical Center, Saratov State University, 83 Astrakhanskaya St, Saratov, 410012 Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 36 Lenin’s Av, Tomsk, 634050 Russia
- Laboratory of Laser Diagnostics of Technical and Living Systems, Institute of Precision Mechanics and Control, FRC “Saratov Scientific Centre of the Russian Academy of Sciences”, 24 Rabochaya St, Saratov, 410028 Russia
- A.N. Bach Institute of Biochemistry, FRC “Fundamentals of Biotechnology” of the Russian Academy of Sciences, 33-2 Leninsky Av, Moscow, 119991 Russia
| |
Collapse
|
120
|
A Deep Learning Model Incorporating Knowledge Representation Vectors and Its Application in Diabetes Prediction. DISEASE MARKERS 2022; 2022:7593750. [PMID: 35990251 PMCID: PMC9391170 DOI: 10.1155/2022/7593750] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 07/24/2022] [Accepted: 07/30/2022] [Indexed: 01/09/2023]
Abstract
Deep learning methods for various disease-prediction tasks have become very effective and can even surpass human experts. However, their lack of interpretability and of embedded medical expertise limits their clinical application. This paper combines knowledge representation learning with deep learning methods to construct a disease prediction model. The model first builds a relationship graph between each physical examination indicator and its test value, based on the indicator's normal range, and encodes the indicator-value pairs with a knowledge representation learning model. Each patient's physical examination data are then represented as a vector and fed into a deep learning model built from a self-attention mechanism and a convolutional neural network to perform disease prediction. The experimental results show that the model, applied to diabetes prediction, yields an accuracy of 97.18% and a recall of 87.55%, outperforming other machine learning methods (e.g., lasso, ridge, support vector machine, random forest, and XGBoost); compared with the best-performing random forest method, recall is increased by 5.34%. It can therefore be concluded that injecting medical knowledge into deep learning through knowledge representation learning can be used in diabetes prediction for the purpose of early detection and assisted diagnosis.
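The first step, relating each examination value to the normal range of its indicator before knowledge-representation encoding, can be sketched as a simple categorical mapping (the function name and the glucose range are hypothetical, for illustration only):

```python
def encode_indicator(value, low, high):
    """Map a test value to a categorical relation against its normal range,
    the kind of (indicator, relation) pair a KRL model could then embed."""
    if value < low:
        return "below_range"
    if value > high:
        return "above_range"
    return "within_range"

# Hypothetical fasting-glucose normal range in mmol/L
relation = encode_indicator(7.4, low=3.9, high=6.1)  # "above_range"
```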
Collapse
|
121
|
Bhatia S, Alojail M, Sengan S, Dadheech P. An efficient modular framework for automatic LIONC classification of MedIMG using unified medical language. Front Public Health 2022; 10:926229. [PMID: 36033768 PMCID: PMC9399779 DOI: 10.3389/fpubh.2022.926229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 06/27/2022] [Indexed: 01/24/2023] Open
Abstract
Doctors use handwritten prescriptions and radiological reports to prescribe drugs for patients with illnesses, injuries, or other problems. To be used effectively, clinical text data such as physician prescription images and radiology reports should be labelled with specific information such as disease type, features, and anatomical location. The semantic annotation of vast collections of biological and biomedical texts, such as scientific papers, medical reports, and general-practitioner observations, has lately been examined by doctors and scientists. By identifying and disambiguating references to biomedical concepts in text, medical semantic annotators can generate such annotations automatically. For Medical Images (MedIMG), we provide a methodology for learning an effective holistic representation (handwritten word pictures as well as radiology reports). Deep Learning (DL) methods have recently gained much interest for their capacity to achieve expert-level accuracy in automated MedIMG analysis. We found that tasks requiring large receptive fields are well suited to downscaled input images, which we verified qualitatively by examining receptive fields and class-activation maps of the trained models. This article focuses on the following contributions: (a) information extraction from narrative MedIMG, (b) automatic categorisation by image resolution and its impact on MedIMG, and (c) a hybrid model for named-entity-recognition predictions using RNN + LSTM + GRM that performs well for every trainee and input purpose. At the same time, supplying interpretable scale weights shows that such multi-scale structures are also crucial for extracting information from high-resolution MedIMG.
A portion of the reports (30%) was manually evaluated by trained physicians, while the rest were automatically categorised using deeply supervised training models based on attention mechanisms and supplied with test reports. MetaMapLite achieved recall and precision, as well as an equivalent F1-score, for primary biomedical text-search techniques and medical-text examination on several MedIMG databases. In addition to implementing and deriving the requirements for MedIMG, the article explores the quality of medical data by using DL techniques to reach large-scale labelled clinical data, and the significance of real-time efforts in biomedical studies, which have played an instrumental role in its extramural diffusion and global appeal.
Collapse
Affiliation(s)
- Surbhi Bhatia
- Department of Information Systems, College of Computer Science and Information Technology, King Faisal University, Al Hasa, Saudi Arabia,*Correspondence: Surbhi Bhatia
| | - Mohammed Alojail
- Department of Information Systems, College of Computer Science and Information Technology, King Faisal University, Al Hasa, Saudi Arabia
| | - Sudhakar Sengan
- Department of Computer Science and Engineering, PSN College of Engineering and Technology, Tirunelveli, India
| | - Pankaj Dadheech
- Department of Computer Science and Engineering, Swami Keshvanand Institute of Technology, Management & Gramothan (SKIT), Jaipur, India
| |
Collapse
|
122
|
Liu X, Pang Y, Jin R, Liu Y, Wang Z. Dual-Domain Reconstruction Network with V-Net and K-Net for Fast MRI. Magn Reson Med 2022; 88:2694-2708. [PMID: 35942977 DOI: 10.1002/mrm.29400] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 07/05/2022] [Accepted: 07/08/2022] [Indexed: 11/10/2022]
Abstract
PURPOSE To introduce a dual-domain reconstruction network with V-Net and K-Net for accurate MR image reconstruction from undersampled k-space data. METHODS Most state-of-the-art reconstruction methods apply U-Net or cascaded U-Nets in the image domain and/or k-space domain. Nevertheless, these methods have the following problems: (1) directly applying U-Net in the k-space domain is not optimal for extracting features; (2) the classical image-domain-oriented U-Net is heavyweight and hence inefficient when cascaded many times to yield good reconstruction accuracy; (3) the classical image-domain-oriented U-Net does not make full use of the information in the encoder network when extracting features in the decoder network; and (4) existing methods are ineffective in simultaneously extracting and fusing features in the image domain and its dual k-space domain. To tackle these problems, we present 3 different methods: (1) V-Net, an image-domain encoder-decoder subnetwork that is more lightweight for cascading and effective in fully utilizing encoder features for decoding; (2) K-Net, a k-space domain subnetwork that is more suitable for extracting hierarchical features in the k-space domain; and (3) KV-Net, a dual-domain reconstruction network in which V-Nets and K-Nets are effectively combined and cascaded. RESULTS Extensive experimental results on the fastMRI dataset demonstrate that the proposed KV-Net can reconstruct high-quality images and outperform state-of-the-art approaches with fewer parameters. CONCLUSIONS To reconstruct images effectively and efficiently from incomplete k-space data, we have presented a dual-domain KV-Net that combines K-Nets and V-Nets. The KV-Net achieves better results with only 9% and 5% of the parameters of comparable methods (XPD-Net and i-RIM, respectively).
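The undersampled-k-space setting this paper targets can be simulated in a few lines; the sketch below (my own, with arbitrary image size and sampling fraction) produces the zero-filled baseline that reconstruction networks such as KV-Net aim to improve upon:

```python
import numpy as np

def undersample_kspace(image, keep_fraction=0.3, seed=0):
    """Simulate random Cartesian undersampling of k-space and return the
    zero-filled reconstruction together with the sampling mask."""
    k = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    mask = rng.random(k.shape[1]) < keep_fraction     # keep a subset of phase-encode lines
    k_under = k * mask[None, :]                        # zero out the unsampled lines
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
    return recon, mask

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                                  # toy "anatomy": a bright square
recon, mask = undersample_kspace(img)                  # aliased zero-filled image
```

A learned dual-domain model would take `k_under` (k-space branch) and `recon` (image branch) as its two inputs and restore the missing content.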
Collapse
Affiliation(s)
- Xiaohan Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Yanwei Pang
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Ruiqi Jin
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Yu Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Zhenchang Wang
- Beijing Friendship Hospital, Capital Medical University, Beijing, People's Republic of China
| |
Collapse
|
123
|
He X, Cai W, Li F, Zhang P, Reyngold M, Cuaron JJ, Cerviño LI, Li T, Li X. Automatic stent recognition using perceptual attention U-net for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy. Med Phys 2022; 49:5283-5293. [PMID: 35524706 PMCID: PMC9827417 DOI: 10.1002/mp.15692] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/26/2022] [Accepted: 04/14/2022] [Indexed: 01/11/2023] Open
Abstract
PURPOSE Stents have often been used as internal surrogates to monitor intrafraction tumor motion during pancreatic cancer radiotherapy. Based on the stent contours generated from planning CT images, the current intrafraction motion review (IMR) system on Varian TrueBeam only provides a tool to verify the stent motion visually but lacks quantitative information. The purpose of this study is to develop an automatic stent recognition method for quantitative intrafraction tumor motion monitoring in pancreatic cancer treatment. METHODS A total of 535 IMR images from 14 pancreatic cancer patients were retrospectively selected for this study, with the manual contour of the stent on each image serving as the ground truth. We developed a deep-learning-based approach that integrates two mechanisms focusing on the features of the segmentation target. Objective attention modeling was integrated into the U-net framework to deal with the optimization difficulties of training a deep network with 2D IMR images and limited training data. A perceptual loss was combined with the binary cross-entropy loss and a Dice loss for supervision. The deep neural network was trained to capture more contextual information to predict binary stent masks. A random-split test was performed, with images of ten patients (71%, 380 images) randomly selected for training, while the remaining four patients (29%, 155 images) were used for testing. Sevenfold cross-validation of the proposed PAUnet on the 14 patients was performed for further evaluation. RESULTS Our stent segmentation results were compared with the manually segmented contours. For the random-split test, the trained model achieved a mean (±standard deviation) stent Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference ($Vol_{diff}$) of 0.96 (±0.01), 1.01 (±0.55) mm, 0.66 (±0.46) mm, and 3.07% (±2.37%), respectively.
The sevenfold cross-validation of the proposed PAUnet yielded mean (±standard deviation) values of 0.96 (±0.02), 0.72 (±0.49) mm, 0.85 (±0.96) mm, and 3.47% (±3.27%) for the DSC, HD95, CMD, and $Vol_{diff}$, respectively. CONCLUSION We developed a novel deep-learning-based approach to automatically segment the stent from IMR images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy.
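The DSC and volume-difference figures above follow standard definitions on binary masks; a minimal sketch on toy masks (the masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_difference(a, b):
    """Relative volume (area) difference in percent of prediction b vs reference a."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 100.0 * abs(int(b.sum()) - int(a.sum())) / a.sum()

gt = np.zeros((8, 8), int);   gt[2:6, 2:6] = 1     # reference stent mask, 16 px
pred = np.zeros((8, 8), int); pred[2:6, 2:7] = 1   # prediction, one column too wide, 20 px
# dice = 2*16/(16+20) = 8/9 ≈ 0.889; volume difference = 25%
```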
Collapse
Affiliation(s)
- Xiuxiu He
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Weixing Cai
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Feifei Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Marsha Reyngold
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - John J. Cuaron
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Laura I. Cerviño
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Tianfang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Xiang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Corresponding Author: Xiang Li, Ph.D., Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, Tel: (516) 559-1501,
| |
Collapse
|
124
|
Agbley BLY, Li J, Hossin MA, Nneji GU, Jackson J, Monday HN, James EC. Federated Learning-Based Detection of Invasive Carcinoma of No Special Type with Histopathological Images. Diagnostics (Basel) 2022; 12:diagnostics12071669. [PMID: 35885573 PMCID: PMC9323034 DOI: 10.3390/diagnostics12071669] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 07/03/2022] [Accepted: 07/05/2022] [Indexed: 11/16/2022] Open
Abstract
Invasive carcinoma of no special type (IC-NST) is known to be one of the most prevalent kinds of breast cancer, hence the growing research interest in studying automated systems that can detect the presence of breast tumors and appropriately classify them into subtypes. Machine learning (ML) and, more specifically, deep learning (DL) techniques have been used to approach this problem. However, such techniques usually require massive amounts of data to obtain competitive results. This requirement makes their application in specific areas such as health problematic as privacy concerns regarding the release of patients’ data publicly result in a limited number of publicly available datasets for the research community. This paper proposes an approach that leverages federated learning (FL) to securely train mathematical models over multiple clients with local IC-NST images partitioned from the breast histopathology image (BHI) dataset to obtain a global model. First, we used residual neural networks for automatic feature extraction. Then, we proposed a second network consisting of Gabor kernels to extract another set of features from the IC-NST dataset. After that, we performed a late fusion of the two sets of features and passed the output through a custom classifier. Experiments were conducted for the federated learning (FL) and centralized learning (CL) scenarios, and the results were compared. Competitive results were obtained, indicating the positive prospects of adopting FL for IC-NST detection. Additionally, fusing the Gabor features with the residual neural network features resulted in the best performance in terms of accuracy, F1 score, and area under the receiver operation curve (AUC-ROC). The models show good generalization by performing well on another domain dataset, the breast cancer histopathological (BreakHis) image dataset. Our method also outperformed other methods from the literature.
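The paper's federated setup aggregates locally trained client models into a global model; the canonical FedAvg update, shown here as a plain size-weighted average over flat parameter lists (a generic sketch, not the authors' exact procedure), looks like:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: size-weighted mean of per-client parameter vectors.
    Clients never share raw images, only their trained parameters."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients holding 100 and 300 local histopathology images
global_w = fed_avg([[1.0, 2.0], [3.0, 6.0]], [100, 300])
# -> [2.5, 5.0]: the larger client contributes proportionally more
```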
Collapse
Affiliation(s)
- Bless Lord Y. Agbley
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (B.L.Y.A.); (H.N.M.)
| | - Jianping Li
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (B.L.Y.A.); (H.N.M.)
- Correspondence:
| | - Md Altab Hossin
- School of Innovation and Entrepreneurship, Chengdu University, Chengdu 610106, China;
| | - Grace Ugochi Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.J.); (E.C.J.)
| | - Jehoiada Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.J.); (E.C.J.)
| | - Happy Nkanta Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (B.L.Y.A.); (H.N.M.)
| | - Edidiong Christopher James
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (G.U.N.); (J.J.); (E.C.J.)
| |
Collapse
|
125
|
Hassannataj Joloudari J, Mojrian S, Nodehi I, Mashmool A, Kiani Zadegan Z, Khanjani Shirkharkolaie S, Alizadehsani R, Tamadon T, Khosravi S, Akbari Kohnehshari M, Hassannatajjeloudari E, Sharifrazi D, Mosavi A, Loh HW, Tan RS, Acharya UR. Application of artificial intelligence techniques for automated detection of myocardial infarction: a review. Physiol Meas 2022; 43. [PMID: 35803247 DOI: 10.1088/1361-6579/ac7fd9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Accepted: 07/08/2022] [Indexed: 11/11/2022]
Abstract
Myocardial infarction (MI) results in heart muscle injury due to insufficient blood flow. MI is the most common cause of mortality in middle-aged and elderly individuals worldwide. To diagnose MI, clinicians need to interpret electrocardiography (ECG) signals, which requires expertise and is subject to observer bias. Artificial intelligence-based methods can be utilized to screen for or diagnose MI automatically using ECG signals. In this work, we conducted a comprehensive assessment of artificial intelligence-based approaches for MI detection based on ECG and other biophysical signals, covering both machine learning (ML) and deep learning (DL) models. The performance of traditional ML methods relies on handcrafted features and manual selection of ECG signals, whereas DL models can automate these tasks. The review observed that deep convolutional neural networks (DCNNs) yielded excellent classification performance for MI diagnosis, which explains why they have become prevalent in recent years. To our knowledge, this is the first comprehensive survey of artificial intelligence techniques employed for MI diagnosis using ECG and other biophysical signals.
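The distinction the review draws between handcrafted features and DCNN-learned features comes down to who chooses the filter weights. Below is a minimal sketch of the convolution operation at the heart of a DCNN layer, applied to a synthetic deflection; the signal and kernel values are illustrative, not taken from any cited model.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation), as in a CNN layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A toy difference kernel responds to sharp deflections such as an R-peak.
ecg = [0.0, 0.0, 0.1, 1.0, 0.1, 0.0, 0.0]   # synthetic beat, not real ECG data
diff_kernel = [-1.0, 0.0, 1.0]
response = conv1d(ecg, diff_kernel)
```

A handcrafted pipeline would fix `diff_kernel` by design; a DCNN learns many such kernels from labeled ECG data.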
Affiliation(s)
- Javad Hassannataj Joloudari
- Department of Computer Engineering, University of Birjand, Birjand, South Khorasan, Iran
- Sanaz Mojrian
- Mazandaran University of Science and Technology, Babol, Iran
- Issa Nodehi
- University of Qom, Qom, Iran
- Amir Mashmool
- University of Genoa, Genoa, Italy
- Zeynab Kiani Zadegan
- University of Birjand, Birjand, South Khorasan, Iran
- Sahar Khanjani Shirkharkolaie
- Mazandaran University of Science and Technology, Babol, Iran
- Roohallah Alizadehsani
- IISRI, Deakin University, Geelong Waterfront Campus, Geelong, Victoria, Australia
- Tahereh Tamadon
- University of Birjand, Birjand, South Khorasan, Iran
- Samiyeh Khosravi
- University of Birjand, Birjand, South Khorasan, Iran
- Mitra Akbari Kohnehshari
- Bu Ali Sina University, Hamedan, Iran
- Edris Hassannatajjeloudari
- Maragheh University of Medical Sciences, Maragheh, East Azerbaijan, Iran
- Danial Sharifrazi
- Islamic Azad University, Shiraz Branch, Shiraz, Iran
- Amir Mosavi
- Faculty of Informatics, Obuda University, Budapest, Hungary
- Hui Wen Loh
- Singapore University of Social Sciences, Singapore
- Ru-San Tan
- Department of Cardiology, National Heart Centre Singapore, Singapore
- U Rajendra Acharya
- Electronic Computer Engineering Division, Ngee Ann Polytechnic, Singapore
126
He L. Non-rigid Multi-Modal Medical Image Registration Based on Improved Maximum Mutual Information PV Image Interpolation Method. Front Public Health 2022; 10:863307. [PMID: 35719652 PMCID: PMC9198292 DOI: 10.3389/fpubh.2022.863307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 03/21/2022] [Indexed: 11/13/2022] Open
Abstract
With the continuous improvement of medical imaging equipment, CT, MRI and PET can capture accurate anatomical information about the same patient site. However, because physiological structures in medical images are fuzzy and the imaged objects are incompletely characterized, the registration results of many methods are not ideal. Therefore, building on a medical image registration model based on the Partial Volume (PV) interpolation method and on rigid registration, this paper establishes a non-rigid registration model using a maximum-mutual-information Novel Partial Volume (NPV) interpolation method. The proposed NPV method uses the Davidon-Fletcher-Powell (DFP) optimization algorithm to solve for the transformation parameter matrix and accurately transform the floating image. In addition, a cubic B-spline is used as the kernel function to improve image interpolation, which effectively improves the accuracy of the registered image. Finally, the proposed NPV method is compared with the PV interpolation method on human brain CT-MRI-PET images to obtain clear fused images. The results show that the proposed NPV method is more accurate, more robust, and easier to implement. The model may also have guiding significance for face recognition and fingerprint recognition.
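Maximum-mutual-information registration scores candidate alignments by the mutual information of the joint intensity distribution of the two images, and an optimizer (here, DFP) searches for the transform that maximizes it. The following is a minimal illustration of the mutual information computation itself on toy intensity sequences; the NPV interpolation and DFP steps are not reproduced.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information (in nats) between two equal-length intensity sequences."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))        # joint intensity histogram
    pa, pb = Counter(img_a), Counter(img_b)   # marginal histograms
    mi = 0.0
    for (a, b), count in joint.items():
        p_joint = count / n
        p_indep = (pa[a] / n) * (pb[b] / n)
        mi += p_joint * math.log(p_joint / p_indep)
    return mi

aligned = [0, 0, 1, 1]    # reference intensities
copy = [0, 0, 1, 1]       # perfectly registered: MI equals the entropy, ln 2
shuffled = [0, 1, 0, 1]   # statistically independent of `aligned`: MI = 0
```

A registration loop would re-sample the floating image under each candidate transform (this is where PV/NPV interpolation enters) and keep the transform with the highest MI.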
Affiliation(s)
- Liting He
- School of Computer and Information Science, Southwest University, Chongqing, China
127
Multi-Scale Tumor Localization Based on Priori Guidance-Based Segmentation Method for Osteosarcoma MRI Images. MATHEMATICS 2022. [DOI: 10.3390/math10122099] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely harmful to human health. Magnetic resonance imaging (MRI) is one of the commonly used methods for the imaging examination of osteosarcoma. Due to the large amount of osteosarcoma MRI image data and the complexity of detection, manual identification of osteosarcoma in MRI images is a time-consuming and labor-intensive task for doctors, and it is highly subjective, which can easily lead to missed diagnoses and misdiagnoses. AI-based medical image-assisted diagnosis alleviates this problem. However, variations in MRI image brightness and the multiple scales at which osteosarcoma appears mean that existing studies still face great challenges in identifying tumor boundaries. Based on this, this study proposed a prior guidance-based assisted segmentation method for MRI images of osteosarcoma, which uses few-shot learning for tumor segmentation and fine fitting. It not only solves the problem of multi-scale tumor localization, but also greatly improves the recognition accuracy of tumor boundaries. First, we preprocessed the MRI images using prior-generation and normalization algorithms to reduce model performance degradation caused by irrelevant regions and high-level features. Then, we used a prior-guided feature network to perform few-shot segmentation of tumors of different sizes based on features in the processed MRI images. Finally, in experiments using more than 80,000 MRI images from the Second Xiangya Hospital, the DOU value of the proposed method reached 0.945, at least 4.3% higher than the other models in the experiment. Our method thus offers higher prediction accuracy and lower resource consumption.
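The few-shot segmentation idea referenced in this abstract is commonly realized with class prototypes: a prototype vector is computed by masked average pooling over annotated support features, and query pixels are labeled by their nearest prototype. The sketch below is a generic illustration of that idea on 1-D toy features, not the paper's prior-guided network; all values are invented.

```python
def masked_average(feature_map, mask):
    """Masked average pooling: class prototype from annotated support features."""
    selected = [f for f, m in zip(feature_map, mask) if m]
    return sum(selected) / len(selected)

def segment(query_features, proto_fg, proto_bg):
    """Label each query pixel by its nearest prototype (1 = tumor, 0 = background)."""
    return [1 if abs(f - proto_fg) < abs(f - proto_bg) else 0
            for f in query_features]

support = [0.9, 0.8, 0.1, 0.2]       # toy 1-D "feature map" of a support image
support_mask = [1, 1, 0, 0]          # annotated tumor pixels
fg = masked_average(support, support_mask)                    # tumor prototype
bg = masked_average(support, [1 - m for m in support_mask])   # background prototype
labels = segment([0.95, 0.05, 0.7], fg, bg)
```

Real implementations use high-dimensional CNN features and cosine similarity, but the prototype-matching structure is the same.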
128
Liberini V, Laudicella R, Balma M, Nicolotti DG, Buschiazzo A, Grimaldi S, Lorenzon L, Bianchi A, Peano S, Bartolotta TV, Farsad M, Baldari S, Burger IA, Huellner MW, Papaleo A, Deandreis D. Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics. Eur Radiol Exp 2022; 6:27. [PMID: 35701671 PMCID: PMC9198151 DOI: 10.1186/s41747-022-00282-0] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 04/20/2022] [Indexed: 11/21/2022] Open
Abstract
In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine, therefore, holds great promise for improving the quality of life of PCa patients, through managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients’ risk-stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these “big data” in both the diagnostic and theragnostic field: from technical aspects (such as semi-automatization of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, improving a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict the outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer.
Affiliation(s)
- Virginia Liberini
- Medical Physiopathology, Division of Nuclear Medicine, Department of Medical Science, University of Torino, A.O.U. Città della Salute e della Scienza di Torino, 10126, Torino, Italy; Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Riccardo Laudicella
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland; Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy; Nuclear Medicine Unit, Fondazione Istituto G. Giglio, Ct.da Pietrapollastra Pisciotto, Cefalù, Palermo, Italy
- Michele Balma
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Ambra Buschiazzo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Serena Grimaldi
- Medical Physiopathology, Division of Nuclear Medicine, Department of Medical Science, University of Torino, A.O.U. Città della Salute e della Scienza di Torino, 10126, Torino, Italy
- Leda Lorenzon
- Medical Physics Department, Central Bolzano Hospital, 39100, Bolzano, Italy
- Andrea Bianchi
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Simona Peano
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Mohsen Farsad
- Nuclear Medicine, Central Hospital Bolzano, 39100, Bolzano, Italy
- Sergio Baldari
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy
- Irene A Burger
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland; Department of Nuclear Medicine, Kantonsspital Baden, 5004, Baden, Switzerland
- Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland
- Alberto Papaleo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Désirée Deandreis
- Medical Physiopathology, Division of Nuclear Medicine, Department of Medical Science, University of Torino, A.O.U. Città della Salute e della Scienza di Torino, 10126, Torino, Italy
129
Kantheti B, Javvaji MK. Medical Image Classification for Disease Prediction with the aid of Deep Learning approaches. 2022 6TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND CONTROL SYSTEMS (ICICCS) 2022:1442-1445. [DOI: 10.1109/iciccs53718.2022.9788144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
Affiliation(s)
- Bhargav Kantheti
- Department of Computer Science and Engineering, GIT, GITAM, Visakhapatnam, Andhra Pradesh, India
- Mukesh Kumar Javvaji
- Department of Computer Science and Engineering, GIT, GITAM, Visakhapatnam, Andhra Pradesh, India
130
Blaivas M, Blaivas LN, Tsung JW. Deep Learning Pitfall: Impact of Novel Ultrasound Equipment Introduction on Algorithm Performance and the Realities of Domain Adaptation. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:855-863. [PMID: 34133034 DOI: 10.1002/jum.15765] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Revised: 05/03/2021] [Accepted: 05/17/2021] [Indexed: 06/12/2023]
Abstract
OBJECTIVES To test deep learning (DL) algorithm performance repercussions by introducing novel ultrasound equipment into a clinical setting. METHODS Researchers introduced prospectively obtained inferior vena cava (IVC) videos from a similar patient population using novel ultrasound equipment to challenge a previously validated DL algorithm (trained on a common point of care ultrasound [POCUS] machine) to assess IVC collapse. Twenty-one new videos were obtained for each novel ultrasound machine. The videos were analyzed for complete collapse by the algorithm and by 2 blinded POCUS experts. Cohen's kappa was calculated for agreement between the 2 POCUS experts and the DL algorithm. Previous testing showed substantial agreement between algorithm and experts, with Cohen's kappa of 0.78 (95% CI 0.49-1.0) and 0.66 (95% CI 0.31-1.0) on new patient data using the same ultrasound equipment. RESULTS Challenged with higher image quality (IQ) POCUS cart ultrasound videos, algorithm performance declined, with kappa values of 0.31 (95% CI 0.19-0.81) and 0.39 (95% CI 0.11-0.89), showing fair agreement. Algorithm performance plummeted on a lower IQ smartphone device, with kappa values of -0.09 (95% CI -0.95 to 0.76) and 0.09 (95% CI -0.65 to 0.82), respectively, showing less agreement than would be expected by chance. The 2 POCUS experts had near perfect agreement, with a kappa value of 0.88 (95% CI 0.64-1.0) regarding IVC collapse. CONCLUSIONS Performance of this previously validated DL algorithm worsened when faced with ultrasound studies from 2 novel ultrasound machines. Performance was much worse on images from a lower IQ hand-held device than from a superior cart-based device.
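Cohen's kappa, the agreement statistic used throughout this study, corrects observed agreement for the agreement expected by chance from each rater's label frequencies. A minimal implementation follows; the example ratings are illustrative, not the study's data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: sum over labels of the product of marginal frequencies.
    p_exp = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                for lab in labels)
    return (p_obs - p_exp) / (1 - p_exp)

perfect = cohens_kappa([1, 0, 1, 0], [1, 0, 1, 0])   # complete agreement
chance = cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0])    # agreement no better than chance
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, and negative values (as seen with the smartphone device above) mean worse than chance.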
Affiliation(s)
- Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Columbia, South Carolina, USA
- Department of Emergency Medicine, St. Francis Hospital, Columbus, Georgia, USA
- James W Tsung
- Department of Emergency Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
131
Bradshaw TJ, Boellaard R, Dutta J, Jha AK, Jacobs P, Li Q, Liu C, Sitek A, Saboury B, Scott PJH, Slomka PJ, Sunderland JJ, Wahl RL, Yousefirizi F, Zuehlsdorff S, Rahmim A, Buvat I. Nuclear Medicine and Artificial Intelligence: Best Practices for Algorithm Development. J Nucl Med 2022; 63:500-510. [PMID: 34740952 PMCID: PMC10949110 DOI: 10.2967/jnumed.121.262567] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 11/01/2021] [Indexed: 11/16/2022] Open
Abstract
The nuclear medicine field has seen a rapid expansion of academic and commercial interest in developing artificial intelligence (AI) algorithms. Users and developers can avoid some of the pitfalls of AI by recognizing and following best practices in AI algorithm development. In this article, recommendations on technical best practices for developing AI algorithms in nuclear medicine are provided, beginning with general recommendations and then continuing with descriptions of how one might practice these principles for specific topics within nuclear medicine. This report was produced by the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Tyler J Bradshaw
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott
- Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Piotr J Slomka
- Department of Imaging, Medicine, and Cardiology, Cedars-Sinai Medical Center, Los Angeles, California
- John J Sunderland
- Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Richard L Wahl
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Irène Buvat
- Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France
132
Bockhold S, Foley SJ, Rainford LA, Corridori R, Eberstein A, Hoeschen C, Konijnenberg MW, Molyneux-Hodgson S, Paulo G, Santos J, McNulty JP. Exploring the translational challenge for medical applications of ionising radiation and corresponding radiation protection research. J Transl Med 2022; 20:137. [PMID: 35303930 PMCID: PMC8932076 DOI: 10.1186/s12967-022-03344-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/06/2022] [Indexed: 01/19/2023] Open
Abstract
Background Medical applications of ionising radiation and associated radiation protection research often encounter long delays and inconsistent implementation when translated into clinical practice. A coordinated effort is needed to analyse the research needs for innovation transfer in radiation-based high-quality healthcare across Europe which can inform the development of an innovation transfer framework tailored for equitable implementation of radiation research at scale. Methods Between March and September 2021 a Delphi methodology was employed to gain consensus on key translational challenges from a range of professional stakeholders. A total of three Delphi rounds were conducted using a series of electronic surveys comprised of open-ended and closed-type questions. The surveys were disseminated via the EURAMED Rocc-n-Roll consortium network and prominent medical societies in the field. Approximately 350 professionals were invited to participate. Participants’ level of agreement with each generated statement was captured using a 6-point Likert scale. Consensus was defined as median ≥ 4 with ≥ 60% of responses in the upper tertile of the scale. Additionally, the stability of responses across rounds was assessed. Results In the first Delphi round a multidisciplinary panel of 20 generated 127 unique statements. The second and third Delphi rounds recruited a broader sample of 130 individuals to rate the extent to which they agreed with each statement as a key translational challenge. A total of 60 consensus statements resulted from the iterative Delphi process of which 55 demonstrated good stability. Ten statements were identified as high priority challenges with ≥ 80% of statement ratings either ‘Agree’ or ‘Strongly Agree’. 
Conclusion A lack of interoperability between systems, insufficient resources, unsatisfactory education and training, and the need for greater public awareness surrounding the benefits, risks, and applications of ionising radiation were identified as principal translational challenges. These findings will help to inform a tailored innovation transfer framework for medical radiation research. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-022-03344-4.
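The consensus rule stated in the Methods (median of at least 4, with at least 60% of responses in the upper tertile of a 6-point Likert scale) is mechanical enough to express directly. The sketch below applies that rule to made-up rating sets; it is an illustration of the stated criterion, not the study's analysis code.

```python
from statistics import median

def has_consensus(ratings, scale_max=6):
    """Consensus: median >= 4 AND >= 60% of ratings in the scale's upper tertile
    (i.e. ratings of 5 or 6 on a 6-point scale)."""
    upper_cut = scale_max * 2 / 3                      # 4.0 on a 6-point scale
    share_upper = sum(r > upper_cut for r in ratings) / len(ratings)
    return median(ratings) >= 4 and share_upper >= 0.6

strong = has_consensus([5, 6, 5, 4, 5])   # median 5, 80% in upper tertile
weak = has_consensus([4, 4, 4, 3])        # median 4, but 0% in upper tertile
```

The high-priority threshold quoted in the Results (at least 80% of ratings 'Agree' or 'Strongly Agree') would be a second pass with the share threshold raised to 0.8.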
Affiliation(s)
- Sophie Bockhold
- Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield, Dublin 4, Ireland
- Shane J Foley
- Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield, Dublin 4, Ireland
- Louise A Rainford
- Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield, Dublin 4, Ireland
- Christoph Hoeschen
- Institute of Medical Engineering, Otto von Guericke Universität Magdeburg, Magdeburg, Germany
- Mark W Konijnenberg
- Department of Radiology and Nuclear Medicine, Erasmus Medical Centre, Rotterdam, Netherlands
- Graciano Paulo
- Escola Superior de Tecnologia da Saúde, Instituto Politécnico de Coimbra, Coimbra, Portugal
- Joana Santos
- Escola Superior de Tecnologia da Saúde, Instituto Politécnico de Coimbra, Coimbra, Portugal
- Jonathan P McNulty
- Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield, Dublin 4, Ireland
133
Shi W, Xu T, Yang H, Xi Y, Du Y, Li J, Li J. Attention Gate based dual-pathway Network for Vertebra Segmentation of X-ray Spine images. IEEE J Biomed Health Inform 2022; 26:3976-3987. [PMID: 35290194 DOI: 10.1109/jbhi.2022.3158968] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic spine and vertebra segmentation from X-ray spine images is a critical and challenging problem in many computer-aided spinal image analysis and disease diagnosis applications. In this paper, a two-stage automatic segmentation framework for spine X-ray images is proposed, which first locates the spine regions (including backbone, sacrum and ilium) in the coarse stage and then identifies eighteen vertebrae (i.e., cervical vertebra 1, thoracic vertebrae 1-12 and lumbar vertebrae 1-5) with isolated and clear boundaries in the fine stage. A novel Attention Gate based dual-pathway Network (AGNet) composed of context and edge pathways is designed to extract semantic and boundary information for segmentation of both spine and vertebra regions. A multi-scale supervision mechanism is applied to explore comprehensive features, and an Edge aware Fusion Mechanism (EFM) is proposed to fuse features extracted from the two pathways. Several additional image processing techniques, such as centralized backbone clipping, patch cropping and convex hull detection, are introduced to further refine the vertebra segmentation results. Experimental validations on a spine X-ray image dataset and a vertebrae dataset suggest that the proposed AGNet achieves superior performance compared with state-of-the-art segmentation methods, and the coarse-to-fine framework can be implemented in real spinal diagnosis systems.
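An additive attention gate of the kind named in the title re-weights skip-connection features by a coefficient in (0, 1) computed from the feature and a gating signal. The scalar sketch below is a generic illustration of that mechanism with hand-picked weights; it is not the AGNet architecture, where the weights are learned and the operations are convolutional.

```python
import math

def attention_gate(features, gating, w_f=1.0, w_g=1.0, w_psi=1.0):
    """Additive attention gate: alpha = sigmoid(w_psi * relu(w_f*f + w_g*g)),
    then each skip feature f is re-weighted by its coefficient alpha."""
    gated = []
    for f, g in zip(features, gating):
        act = max(0.0, w_f * f + w_g * g)                # ReLU of the additive term
        alpha = 1.0 / (1.0 + math.exp(-w_psi * act))     # attention coefficient
        gated.append(alpha * f)
    return gated

out = attention_gate([2.0, -1.0], [0.0, 0.0])
```

With a zero gating signal the second (negative) feature is suppressed to half its value (alpha = 0.5 at zero activation), while the strongly positive feature passes through nearly unchanged.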
134
Ma Y, Zhang C, Cabezas M, Song Y, Tang Z, Liu D, Cai W, Barnett M, Wang C. Multiple Sclerosis Lesion Analysis in Brain Magnetic Resonance Images: Techniques and Clinical Applications. IEEE J Biomed Health Inform 2022; 26:2680-2692. [PMID: 35171783 DOI: 10.1109/jbhi.2022.3151741] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Multiple sclerosis (MS) is a chronic inflammatory and degenerative disease of the central nervous system, characterized by the appearance of focal lesions in the white and gray matter that topographically correlate with an individual patient's neurological symptoms and signs. Magnetic resonance imaging (MRI) provides detailed in-vivo structural information, permitting the quantification and categorization of MS lesions that critically inform disease management. Traditionally, MS lesions have been manually annotated on 2D MRI slices, a process that is inefficient and prone to inter-/intra-observer errors. Recently, automated statistical imaging analysis techniques have been proposed to detect and segment MS lesions based on MRI voxel intensity. However, their effectiveness is limited by the heterogeneity of both MRI data acquisition techniques and the appearance of MS lesions. By learning complex lesion representations directly from images, deep learning techniques have achieved remarkable breakthroughs in the MS lesion segmentation task. Here, we provide a comprehensive review of state-of-the-art automatic statistical and deep-learning MS segmentation methods and discuss current and future clinical applications. Further, we review technical strategies, such as domain adaptation, to enhance MS lesion segmentation in real-world clinical settings.
135
Tangudu VSK, Kakarla J, Venkateswarlu IB. COVID-19 detection from chest x-ray using MobileNet and residual separable convolution block. Soft comput 2022; 26:2197-2208. [PMID: 35106060 PMCID: PMC8794607 DOI: 10.1007/s00500-021-06579-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/10/2021] [Indexed: 10/27/2022]
Abstract
A newly emerged coronavirus disease affects the social and economic life of the world. This virus mainly infects the respiratory system and spreads through airborne transmission. Several countries have witnessed the serious consequences of the COVID-19 pandemic. Early detection of COVID-19 infection is a critical step in saving a patient's life. Chest radiography is a fast and cost-effective way to detect COVID-19. Several researchers have been motivated to automate the COVID-19 detection and diagnosis process using chest X-ray images. However, existing models employ deep networks and suffer from long training times. This work presents transfer learning with a residual separable convolution block for COVID-19 detection. The proposed model utilizes a pre-trained MobileNet for binary image classification, and the proposed residual separable convolution block improves the performance of the basic MobileNet. Two publicly available datasets, COVID5K and COVIDRD, were considered for evaluation. Our proposed model exhibits superior performance over existing state-of-the-art and pre-trained models, with 99% accuracy on both datasets, and achieves similar performance on noisy datasets. Moreover, the proposed model outperforms existing pre-trained models with less training time and performs competitively with the basic MobileNet. Further, our model is suitable for mobile applications, as it uses fewer parameters and requires less training time.
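MobileNet's efficiency, which this model inherits, rests on replacing standard convolutions with depthwise separable ones: a per-channel spatial filter followed by a 1 x 1 pointwise projection. A quick parameter count makes the saving concrete; the channel sizes below are arbitrary examples, not the paper's configuration.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    plus a 1 x 1 pointwise projection to c_out channels."""
    return c_in * k * k + c_in * c_out

standard = conv_params(32, 64, 3)             # 32 * 64 * 9  = 18432 weights
separable = separable_conv_params(32, 64, 3)  # 288 + 2048   =  2336 weights
```

The roughly 8x reduction here is typical for 3 x 3 kernels, which is why separable blocks suit mobile deployment; the residual connection in the proposed block adds no parameters at all.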
Affiliation(s)
| | - Jagadeesh Kakarla
- Indian Institute of Information Technology, Design and Manufacturing, Kancheepuram, Chennai, India
| | | |
136
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
137
Wang S, Hou Y, Li X, Meng X, Zhang Y, Wang X. Practical Implementation of Artificial Intelligence-Based Deep Learning and Cloud Computing on the Application of Traditional Medicine and Western Medicine in the Diagnosis and Treatment of Rheumatoid Arthritis. Front Pharmacol 2022; 12:765435. [PMID: 35002704 PMCID: PMC8733656 DOI: 10.3389/fphar.2021.765435] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 12/09/2021] [Indexed: 12/23/2022] Open
Abstract
Rheumatoid arthritis (RA), an autoimmune disease of unknown etiology, is a serious threat to the health of middle-aged and elderly people. Although Western medicine and traditional medicine (such as traditional Chinese medicine, Tibetan medicine and other ethnic medicine) have shown certain advantages in the diagnosis and treatment of RA, practical shortcomings remain, such as delayed diagnosis, improper treatment schemes and unclear drug mechanisms. At present, the application of artificial intelligence (AI)-based deep learning and cloud computing has attracted wide attention in the medical and health field, especially in screening potential active ingredients, targets and action pathways of single drugs or prescriptions in traditional medicine and in optimizing disease diagnosis and treatment models. Integrating and analyzing information on RA patients using AI and medical big data will unquestionably benefit more RA patients worldwide. In this review, we elaborate on the application status and prospects of AI-assisted deep learning and cloud computing in both Western and traditional medicine for the diagnosis and treatment of RA at different stages. It can be predicted that, with the help of AI, more pharmacological mechanisms of effective ethnic drugs against RA will be elucidated and more accurate solutions will be provided for the treatment and diagnosis of RA in the future.
Affiliation(s)
- Shaohui Wang
- School of Ethnic Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Ya Hou
- School of Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xuanhao Li
- Chengdu Second People's Hospital, Chengdu, China
| | - Xianli Meng
- State Key Laboratory of Southwestern Chinese Medicine Resources, Innovative Institute of Chinese Medicine and Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Yi Zhang
- School of Ethnic Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Xiaobo Wang
- State Key Laboratory of Southwestern Chinese Medicine Resources, Innovative Institute of Chinese Medicine and Pharmacy, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| |
138
Li X, Xia C, Li X, Wei S, Zhou S, Yu X, Gao J, Cao Y, Zhang H. Identifying diabetes from conjunctival images using a novel hierarchical multi-task network. Sci Rep 2022; 12:264. [PMID: 34997031 PMCID: PMC8742044 DOI: 10.1038/s41598-021-04006-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 12/06/2021] [Indexed: 11/15/2022] Open
Abstract
Diabetes can cause microvessel impairment. However, these conjunctival pathological changes are not easily recognized, limiting their potential as independent diagnostic indicators. Therefore, we designed a deep learning model to explore the relationship between conjunctival features and diabetes, and to advance the automated identification of diabetes from conjunctival images. Images were collected from patients with type 2 diabetes and from healthy volunteers. A hierarchical multi-task network model (HMT-Net) was developed using conjunctival images, and the model was systematically evaluated and compared with other algorithms. The sensitivity, specificity, and accuracy of the HMT-Net model for identifying diabetes were 78.70%, 69.08%, and 75.15%, respectively. The performance of the HMT-Net model was significantly better than that of ophthalmologists. The model allows sensitive and rapid discrimination from conjunctival images and can potentially be useful for identifying diabetes.
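As a minimal illustration (not the authors' code), the three metrics reported for HMT-Net follow directly from binary confusion-matrix counts; the counts below are hypothetical, not the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
sens, spec, acc = binary_metrics(tp=85, fp=30, tn=67, fn=23)
```

Reporting all three matters here because accuracy alone can look deceptively good when the two classes are of unequal size.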
Collapse
Affiliation(s)
- Xinyue Li
- Eye Hospital, The First Affiliated Hospital of Harbin Medical University, No.143, Yiman Street, Nangang District, Harbin City, 150001, Heilongjiang Province, China
- Key Laboratory of Basic and Clinical Research of Heilongjiang Province, Harbin, 150001, China
- Eye Department, Shanghai Children's Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Chenjie Xia
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Room 230, Building 1, Yuquan Campus, 38 Zhe Da Road, Hangzhou, 310027, Zhejiang Province, China
| | - Xin Li
- School of Electrical Engineering and Computer Science, 2002 Digital Media Center, Louisiana State University, 340 E. Parker Blvd, Baton Rouge, LA, 70803, USA
| | - Shuangqing Wei
- School of Electrical Engineering and Computer Science, 2002 Digital Media Center, Louisiana State University, 340 E. Parker Blvd, Baton Rouge, LA, 70803, USA
| | - Sujun Zhou
- Eye Hospital, The First Affiliated Hospital of Harbin Medical University, No.143, Yiman Street, Nangang District, Harbin City, 150001, Heilongjiang Province, China
- Key Laboratory of Basic and Clinical Research of Heilongjiang Province, Harbin, 150001, China
| | - Xuhui Yu
- Eye Hospital, The First Affiliated Hospital of Harbin Medical University, No.143, Yiman Street, Nangang District, Harbin City, 150001, Heilongjiang Province, China
- Key Laboratory of Basic and Clinical Research of Heilongjiang Province, Harbin, 150001, China
| | - Jiayue Gao
- Eye Hospital, The First Affiliated Hospital of Harbin Medical University, No.143, Yiman Street, Nangang District, Harbin City, 150001, Heilongjiang Province, China
- Key Laboratory of Basic and Clinical Research of Heilongjiang Province, Harbin, 150001, China
| | - Yanpeng Cao
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Room 230, Building 1, Yuquan Campus, 38 Zhe Da Road, Hangzhou, 310027, Zhejiang Province, China.
| | - Hong Zhang
- Eye Hospital, The First Affiliated Hospital of Harbin Medical University, No.143, Yiman Street, Nangang District, Harbin City, 150001, Heilongjiang Province, China.
- Key Laboratory of Basic and Clinical Research of Heilongjiang Province, Harbin, 150001, China.
| |
Collapse
|
139
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding solutions and frameworks for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run and train those robust and complex AI algorithms, and the accessibility of datasets large enough for training them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists resources from which their datasets can be accessed for research purposes. It then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using the various imaging modalities. We primarily focus on reviewing frameworks that have reported results using mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammography is the availability of labelled datasets: dataset availability is one of the most important aspects of developing AI-based frameworks, since such algorithms are data hungry and the quality of the dataset generally affects the performance of AI-based algorithms. In a nutshell, this article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Collapse
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
| | - Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| |
Collapse
|
140
|
Foran DJ, Durbin EB, Chen W, Sadimin E, Sharma A, Banerjee I, Kurc T, Li N, Stroup AM, Harris G, Gu A, Schymura M, Gupta R, Bremer E, Balsamo J, DiPrima T, Wang F, Abousamra S, Samaras D, Hands I, Ward K, Saltz JH. An Expandable Informatics Framework for Enhancing Central Cancer Registries with Digital Pathology Specimens, Computational Imaging Tools, and Advanced Mining Capabilities. J Pathol Inform 2022; 13:5. [PMID: 35136672 PMCID: PMC8794027 DOI: 10.4103/jpi.jpi_31_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 04/30/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Population-based state cancer registries are an authoritative source for cancer statistics in the United States. They routinely collect a variety of data, including patient demographics, primary tumor site, stage at diagnosis, first course of treatment, and survival, on every cancer case that is reported across all U.S. states and territories. The goal of our project is to enrich NCI's Surveillance, Epidemiology, and End Results (SEER) registry data with high-quality population-based biospecimen data in the form of digital pathology, machine-learning-based classifications, and quantitative histopathology imaging feature sets (referred to here as Pathomics features). MATERIALS AND METHODS As part of the project, the underlying informatics infrastructure was designed, tested, and implemented through close collaboration with several participating SEER registries to ensure consistency with registry processes, computational scalability, and the ability to support the creation of population cohorts that span multiple sites. Utilizing computational imaging algorithms and methods to both generate indices and search for matches makes it possible to reduce inter- and intra-observer inconsistencies and to improve the objectivity with which large image repositories are interrogated. RESULTS Our team has created and continues to expand a well-curated repository of high-quality digitized pathology images corresponding to subjects whose data are routinely collected by the collaborating registries. Our team has systematically deployed and tested key visual analytic methods to facilitate the automated creation of population cohorts for epidemiological studies, together with tools to support visualization of feature clusters and evaluation of whole-slide images.
As part of these efforts, we are developing and optimizing advanced search and matching algorithms to facilitate automated, content-based retrieval of digitized specimens based on their underlying image features and staining characteristics. CONCLUSION To meet the challenges of this project, we established the analytic pipelines, methods, and workflows to support the expansion and management of a growing repository of high-quality digitized pathology and information-rich, population cohorts containing objective imaging and clinical attributes to facilitate studies that seek to discriminate among different subtypes of disease, stratify patient populations, and perform comparisons of tumor characteristics within and across patient cohorts. We have also successfully developed a suite of tools based on a deep-learning method to perform quantitative characterizations of tumor regions, assess infiltrating lymphocyte distributions, and generate objective nuclear feature measurements. As part of these efforts, our team has implemented reliable methods that enable investigators to systematically search through large repositories to automatically retrieve digitized pathology specimens and correlated clinical data based on their computational signatures.
Collapse
Affiliation(s)
- David J. Foran
- Center for Biomedical Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- Department of Pathology and Laboratory Medicine, Rutgers-Robert Wood Johnson Medical School, Piscataway, NJ, USA
| | - Eric B. Durbin
- Kentucky Cancer Registry, Markey Cancer Center, University of Kentucky, Lexington, KY, USA
- Division of Biomedical Informatics, Department of Internal Medicine, College of Medicine, Lexington, KY, USA
| | - Wenjin Chen
- Center for Biomedical Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
| | - Evita Sadimin
- Center for Biomedical Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- Department of Pathology and Laboratory Medicine, Rutgers-Robert Wood Johnson Medical School, Piscataway, NJ, USA
| | - Ashish Sharma
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
| | - Imon Banerjee
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Nan Li
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
| | - Antoinette M. Stroup
- New Jersey State Cancer Registry, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
| | - Gerald Harris
- New Jersey State Cancer Registry, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
| | - Annie Gu
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
| | - Maria Schymura
- New York State Cancer Registry, New York State Department of Health, Albany, NY, USA
| | - Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Erich Bremer
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Joseph Balsamo
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Tammy DiPrima
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Feiqiao Wang
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Shahira Abousamra
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Isaac Hands
- Division of Biomedical Informatics, Department of Internal Medicine, College of Medicine, Lexington, KY, USA
| | - Kevin Ward
- Georgia State Cancer Registry, Georgia Department of Public Health, Atlanta, GA, USA
| | - Joel H. Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| |
Collapse
|
141
|
Khan S, Banday SA, Alam M. Big Data for Treatment Planning: Pathways and Possibilities for Smart Healthcare Systems. Curr Med Imaging 2022; 19:19-26. [PMID: 34533449 DOI: 10.2174/1573405617666210917125642] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 07/14/2021] [Accepted: 07/16/2021] [Indexed: 11/22/2022]
Abstract
BACKGROUND Treatment planning is one of the crucial stages of healthcare assessment and delivery, and it has a significant impact on patient outcomes and system efficiency. With the evolution of transformative healthcare technologies, most areas of healthcare have started collecting data at different levels, resulting in a surge in the size and complexity of the health data generated every minute. INTRODUCTION This paper explores the characteristics of health data with respect to big data. It also classifies research efforts in treatment planning on the basis of the informatics domain used: medical informatics, imaging informatics, or translational bioinformatics. METHODS This is a survey paper that reviews the existing literature on the use of big data technologies for treatment planning in the healthcare ecosystem; a qualitative research methodology was therefore adopted. RESULTS The existing literature was analyzed to identify potential gaps in research and to provide insights into high-prospect areas for future work. CONCLUSION The use of big data for treatment planning is rapidly evolving, and the findings of this research can kick-start and streamline specific research pathways in the field.
Collapse
Affiliation(s)
- Samiya Khan
- School of Mathematics and Computer Science, University of Wolverhampton, Wolverhampton, United Kingdom
| | - Shoaib Amin Banday
- Department of Electronics & Communication, Islamic University of Science & Technology, Awantipora, India
| | - Mansaf Alam
- Department of Computer Science, Jamia Millia Islamia, New Delhi, India
| |
Collapse
|
142
|
Yang F, Weng X, Miao Y, Wu Y, Xie H, Lei P. Deep learning approach for automatic segmentation of ulna and radius in dual-energy X-ray imaging. Insights Imaging 2021; 12:191. [PMID: 34928449 PMCID: PMC8688680 DOI: 10.1186/s13244-021-01137-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 11/29/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Segmentation of the ulna and radius is a crucial step for the measurement of bone mineral density (BMD) in dual-energy X-ray imaging in patients suspected of having osteoporosis. PURPOSE This work aimed to propose a deep learning approach for the accurate automatic segmentation of the ulna and radius in dual-energy X-ray imaging. METHODS AND MATERIALS We developed a deep learning model with residual blocks (Resblocks) for the segmentation of the ulna and radius. Three hundred and sixty subjects were included in the study, and five-fold cross-validation was used to evaluate the performance of the proposed network. The Dice coefficient and Jaccard index were calculated to evaluate the segmentation results. RESULTS The proposed network model outperformed previous deep learning-based methods in the automatic segmentation of the ulna and radius. The average Dice coefficients for the ulna and radius were 0.9835 and 0.9874, with average Jaccard indexes of 0.9680 and 0.9751, respectively. CONCLUSION The deep learning-based method developed in this study improved the segmentation performance for the ulna and radius in dual-energy X-ray imaging.
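The two reported overlap metrics can be sketched for binary masks as follows (a generic NumPy implementation, not the authors' code):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice coefficient and Jaccard index of two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()       # overlap area
    union = np.logical_or(pred, gt).sum()        # combined area
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
gt = np.array([[1, 0, 0], [0, 1, 1]])    # toy ground-truth mask
d, j = dice_jaccard(pred, gt)            # d = 2*2/(3+3) ≈ 0.667, j = 2/4 = 0.5
```

The two metrics are monotonically related (J = D / (2 − D)), so they rank segmentations identically; reporting both mainly aids comparison with prior work.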
Collapse
Affiliation(s)
- Fan Yang
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
- Key Laboratory of Biology and Medical Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
| | - Xin Weng
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
- Key Laboratory of Biology and Medical Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
| | - Yuehong Miao
- School of Biology and Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
- Key Laboratory of Biology and Medical Engineering, Guizhou Medical University, Guiyang, Guizhou Province, China
| | - Yuhui Wu
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, No. 28, Guiyi Street, Yunyan District, Guiyang, 550004, Guizhou Province, China
| | - Hong Xie
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, No. 28, Guiyi Street, Yunyan District, Guiyang, 550004, Guizhou Province, China
| | - Pinggui Lei
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, No. 28, Guiyi Street, Yunyan District, Guiyang, 550004, Guizhou Province, China.
| |
Collapse
|
143
|
Exarchos K, Aggelopoulou A, Oikonomou A, Biniskou T, Beli V, Antoniadou E, Kostikas K. Review of Artificial Intelligence techniques in Chronic Obstructive Lung Disease. IEEE J Biomed Health Inform 2021; 26:2331-2338. [PMID: 34914601 DOI: 10.1109/jbhi.2021.3135838] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
BACKGROUND Artificial Intelligence (AI) has proven to be an invaluable asset in the healthcare domain, where massive amounts of data are produced. Chronic Obstructive Pulmonary Disease (COPD) is a heterogeneous chronic condition with multiscale manifestations and complex interactions that represents an ideal target for AI. OBJECTIVE The aim of this review article is to appraise the adoption of AI in COPD research, and more specifically its applications to date along with reported results, potential challenges and future prospects. METHODS We performed a review of the literature from PubMed and DBLP and assembled studies published up to 2020, yielding 156 articles relevant to the scope of this review. RESULTS The resulting articles were assessed and organized into four basic contextual categories, namely: i) COPD diagnosis, ii) COPD prognosis, iii) Patient classification, iv) COPD management, and subsequently presented in an orderly manner based on a set of qualitative and quantitative criteria. CONCLUSIONS We observed considerable acceleration of research activity utilizing AI techniques in COPD research, especially in the last couple of years; nevertheless, the massive production of large and complex data in COPD calls for broader adoption of AI and more advanced techniques.
Collapse
|
144
|
Ganitidis T, Athanasiou M, Dalakleidi K, Melanitis N, Golemati S, Nikita KS. Stratification of carotid atheromatous plaque using interpretable deep learning methods on B-mode ultrasound images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3902-3905. [PMID: 34892085 DOI: 10.1109/embc46164.2021.9630402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Carotid atherosclerosis is the major cause of ischemic stroke, resulting in significant rates of mortality and disability annually. Early diagnosis of such cases is of great importance, since it enables clinicians to apply a more effective treatment strategy. This paper introduces an interpretable classification approach for carotid ultrasound images for the risk assessment and stratification of patients with carotid atheromatous plaque. To address the highly imbalanced distribution of patients between the symptomatic and asymptomatic classes (16 vs 58, respectively), an ensemble learning scheme based on a sub-sampling approach was applied, along with a two-phase, cost-sensitive learning strategy that uses the original and a resampled data set. Convolutional Neural Networks (CNNs) were utilized for building the primary models of the ensemble. A six-layer deep CNN was used to automatically extract features from the images, followed by a classification stage of two fully connected layers. The obtained results (Area Under the ROC Curve (AUC): 73%, sensitivity: 75%, specificity: 70%) indicate that the proposed approach achieved acceptable discrimination performance. Finally, interpretability methods were applied to the model's predictions in order to reveal insights into the model's decision process and to enable the identification of novel image biomarkers for the stratification of patients with carotid atheromatous plaque. Clinical Relevance: The integration of interpretability methods with deep learning strategies can facilitate the identification of novel ultrasound image biomarkers for the stratification of patients with carotid atheromatous plaque.
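The sub-sampling ensemble idea for a 16-vs-58 class imbalance can be sketched as follows; this is a hedged illustration that uses a plain scikit-learn logistic-regression classifier as a stand-in for the paper's CNNs, with synthetic toy features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def undersample_ensemble(X, y, n_models=5, seed=0):
    """Train one model per balanced subsample: all minority (label 1)
    samples plus an equal-sized random draw from the majority class."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    models = []
    for _ in range(n_models):
        maj_draw = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, maj_draw])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
    return models

def ensemble_predict(models, X):
    """Average the per-model positive-class probabilities, threshold at 0.5."""
    p = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (p >= 0.5).astype(int)

# Toy data mirroring the 58 asymptomatic vs 16 symptomatic imbalance
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (58, 2)), rng.normal(4, 1, (16, 2))])
y = np.array([0] * 58 + [1] * 16)
pred = ensemble_predict(undersample_ensemble(X, y), X)
```

Each base model sees a balanced view of the data, so the minority (symptomatic) class is not drowned out, while averaging over several random majority draws recovers most of the discarded information.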
Collapse
|
145
|
Parashar G, Chaudhary A, Rana A. Systematic Mapping Study of AI/Machine Learning in Healthcare and Future Directions. SN COMPUTER SCIENCE 2021; 2:461. [PMID: 34549197 PMCID: PMC8444522 DOI: 10.1007/s42979-021-00848-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 09/01/2021] [Indexed: 12/22/2022]
Abstract
This study attempts to categorise research on the use of machine learning in healthcare, using a systematic mapping study methodology. We reviewed literature from top journals, articles, and conference papers using the keywords "use of machine learning in healthcare". Querying Google Scholar returned 1400 papers, which we then categorised on the basis of the objective of the study, the methodology adopted, the type of problem attempted, and the disease studied. As a result, we identified five categories of study: interpretable ML, evaluation of medical images, processing of EHRs, security/privacy frameworks, and transfer learning. We also found that cancer is the most studied disease and epilepsy one of the least studied, that evaluation of medical images is the most researched area, and that a newer field of research, interpretable ML/explainable AI, is gaining momentum. Our intent is to give future researchers a fair idea of the field and its future directions.
Collapse
Affiliation(s)
| | | | - Ajay Rana
- AIIT, AMITY University, Noida, Uttar Pradesh, India
| |
Collapse
|
146
|
Abstract
Early screening of COVID-19 is essential for pandemic control, and thus for relieving stress on the health care system. Lung segmentation from chest X-ray (CXR) is a promising method for the early diagnosis of pulmonary diseases. Recently, deep learning has achieved great success in supervised lung segmentation. However, how to effectively utilize the lung region in screening for COVID-19 still remains a challenge, due to domain shift and the lack of manual pixel-level annotations. We hereby propose a multi-appearance COVID-19 screening framework that uses lung region priors derived from CXR images. Firstly, we propose a multi-scale adversarial domain adaptation network (MS-AdaNet) to boost the cross-domain lung segmentation task, providing prior knowledge to the classification network. Then, we construct a multi-appearance network (MA-Net), composed of three sub-networks, to realize multi-appearance feature extraction and fusion using lung region priors. Finally, we obtain predictions for the normal, viral pneumonia, and COVID-19 classes using the proposed MA-Net. We extend the proposed MS-AdaNet to the lung segmentation task on three different public CXR datasets. The results suggest that MS-AdaNet outperforms the compared methods in cross-domain lung segmentation. Moreover, experiments reveal that the proposed MA-Net achieves an accuracy of 98.83% and an F1-score of 98.71% on COVID-19 screening, indicating strong screening performance.
Collapse
|
147
|
Hsu W, Baumgartner C, Deserno TM. Notable Papers and New Directions in Sensors, Signals, and Imaging Informatics. Yearb Med Inform 2021; 30:150-158. [PMID: 34479386 PMCID: PMC8416210 DOI: 10.1055/s-0041-1726526] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
OBJECTIVE To identify and highlight research papers representing noteworthy developments in signals, sensors, and imaging informatics in 2020. METHOD A broad literature search was conducted on PubMed and Scopus databases. We combined Medical Subject Heading (MeSH) terms and keywords to construct particular queries for sensors, signals, and image informatics. We only considered papers that have been published in journals providing at least three articles in the query response. Section editors then independently reviewed the titles and abstracts of preselected papers assessed on a three-point Likert scale. Papers were rated from 1 (do not include) to 3 (should be included) for each topical area (sensors, signals, and imaging informatics) and those with an average score of 2 or above were subsequently read and assessed again by two of the three co-editors. Finally, the top 14 papers with the highest combined scores were considered based on consensus. RESULTS The search for papers was executed in January 2021. After removing duplicates and conference proceedings, the query returned a set of 101, 193, and 529 papers for sensors, signals, and imaging informatics, respectively. We filtered out journals that had less than three papers in the query results, reducing the number of papers to 41, 117, and 333, respectively. From these, the co-editors identified 22 candidate papers with more than 2 Likert points on average, from which 14 candidate best papers were nominated after intensive discussion. At least five external reviewers then rated the remaining papers. The four finalist papers were found using the composite rating of all external reviewers. These best papers were approved by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board. CONCLUSIONS Sensors, signals, and imaging informatics is a dynamic field of intense research. 
The four best papers represent advanced approaches for combining, processing, modeling, and analyzing heterogeneous sensor and imaging data. The selected papers demonstrate the combination and fusion of multiple sensors and sensor networks using electrocardiogram (ECG), electroencephalogram (EEG), or photoplethysmogram (PPG) with advanced data processing, deep and machine learning techniques, and present image processing modalities beyond state-of-the-art that significantly support and further improve medical decision making.
Collapse
Affiliation(s)
- William Hsu
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA, United States of America
| | - Christian Baumgartner
- Institute of Health Care Engineering with European Testing Center of Medical Devices, Graz University of Technology, Austria
| | - Thomas M. Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
| |
Collapse
|
148
|
Wen H, Zheng W, Li M, Li Q, Liu Q, Zhou J, Liu Z, Chen X. Multiparametric Quantitative US Examination of Liver Fibrosis: A Feature-engineering and Machine-learning Based Analysis. IEEE J Biomed Health Inform 2021; 26:715-726. [PMID: 34329172 DOI: 10.1109/jbhi.2021.3100319] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Quantitative ultrasound (QUS), which is commonly used to extract quantitative features from ultrasound radiofrequency (RF) data or RF envelope signals for tissue characterization, is becoming a promising technique for noninvasive assessment of liver fibrosis. However, the number of feature variables examined and ultimately used in existing QUS methods is typically small, which to some extent limits diagnostic performance. This paper therefore devises a new multiparametric QUS (MP-QUS) method that enables the extraction of a large number of feature variables from US RF signals and allows feature-engineering and machine-learning based algorithms to be used for liver fibrosis assessment. In the MP-QUS, eighty-four feature variables were extracted from multiple QUS parametric maps derived from the RF signals and the envelope data. Feature reduction and selection were then performed in turn to remove feature redundancy and identify the best combination of features in the reduced feature set. Finally, a variety of machine-learning algorithms were tested for classifying liver fibrosis with the selected features, and based on these results the optimal classifier was established and used for final classification. The performance of the proposed MP-QUS method for staging liver fibrosis was evaluated on an animal model, with histologic examination as the reference standard. The mean accuracy, sensitivity, specificity and area under the receiver-operating-characteristic curve achieved by MP-QUS were 83.38%, 86.04%, 80.82% and 0.891, respectively, for recognizing significant liver fibrosis, and 85.50%, 88.92%, 85.24% and 0.924 for diagnosing liver cirrhosis. The proposed MP-QUS method paves the way for future extension to assessing liver fibrosis in human subjects.
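A generic version of the "feature reduction, then feature selection" pipeline can be sketched as below; the abstract does not specify the exact criteria, so this hedged example uses correlation-based redundancy removal followed by univariate selection on synthetic data:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def drop_redundant(X, threshold=0.95):
    """Greedily keep a feature only if its absolute correlation with
    every already-kept feature is below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                                   # 12 base features
X = np.hstack([X, X[:, :3] + 1e-3 * rng.normal(size=(60, 3))])  # 3 redundant copies
y = rng.integers(0, 2, 60)                                      # toy binary labels
X_red, kept = drop_redundant(X)                                 # redundancy removal
X_sel = SelectKBest(f_classif, k=5).fit_transform(X_red, y)     # univariate selection
```

The two stages serve different purposes: the first removes near-duplicate features regardless of the labels, while the second keeps only the features most associated with the class.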
Collapse
|
149
|
Ji J, Chen Z, Yang C. Convolutional Neural Network with Sparse Strategies to Classify Dynamic Functional Connectivity. IEEE J Biomed Health Inform 2021; 26:1219-1228. [PMID: 34314368 DOI: 10.1109/jbhi.2021.3100559] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Classification of dynamic functional connectivity (DFC) is becoming a promising approach for diagnosing various neurodegenerative diseases. However, existing methods generally face the problem of overfitting. To address it, this paper proposes a convolutional neural network with three sparse strategies, named SCNN, to classify DFC. Firstly, an element-wise filter is designed to impose sparse constraints on the DFC matrix by replacing redundant elements with zeroes, where the DFC matrix is specially constructed to quantify the spatial and temporal variation of DFC. Secondly, a 1×1 convolutional filter is adopted to reduce the dimensionality of the sparse DFC matrix and to remove the meaningless features resulting from zero elements in the subsequent convolution process. Finally, an extra sparse optimization classifier is employed to optimize the parameters of the above two filters, which can effectively improve the ability of SCNN to extract discriminative features. Experimental results on multiple resting-state fMRI datasets demonstrate that the proposed model provides better classification performance on DFC than several state-of-the-art methods, and can identify abnormal brain functional connectivity.
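A 1×1 convolution is simply a per-position linear map across channels; a minimal NumPy sketch (not the paper's implementation) shows how it compresses channel dimensionality while leaving zeroed positions at zero:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x is (H, W, C_in), w is (C_in, C_out).
    Each spatial position is mapped independently across channels."""
    return np.tensordot(x, w, axes=([2], [0]))  # result: (H, W, C_out)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 16))   # sparse DFC-like feature map, 16 channels
x[2, 3, :] = 0.0                  # a fully zeroed (sparsified) position
w = rng.normal(size=(16, 4))      # compress 16 channels down to 4
out = conv1x1(x, w)               # shape (8, 8, 4)
```

With no bias term, an all-zero position stays all-zero after the 1×1 convolution, which matches the motivation of keeping zeroed (sparse) entries from generating spurious features downstream.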
|
150
|
Sen B, Cullen KR, Parhi KK. Classification of Adolescent Major Depressive Disorder Via Static and Dynamic Connectivity. IEEE J Biomed Health Inform 2021; 25:2604-2614. [PMID: 33296316 DOI: 10.1109/jbhi.2020.3043427] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
This paper introduces an approach for classifying adolescents with major depressive disorder (MDD) using resting-state functional magnetic resonance imaging (fMRI). Accurate diagnosis of MDD involves interviews with adolescent patients and their parents, symptom rating scales based on the Diagnostic and Statistical Manual of Mental Disorders (DSM), behavioral observation, and the experience of a clinician. Discovering predictive biomarkers for diagnosing MDD from fMRI scans can assist clinicians in their diagnostic assessments. This paper investigates various static and dynamic connectivity measures extracted from resting-state fMRI for assisting with MDD diagnosis. First, absolute Pearson correlation matrices from 85 brain regions are computed and used to calculate static features for predicting MDD. A predictive sub-network extracted using sub-graph entropy classifies adolescent MDD vs. typical healthy controls with high accuracy, sensitivity and specificity. Next, approaches utilizing dynamic connectivity are employed to extract tensor-based, independent-component-based and principal-component-based subject-specific attributes. Finally, features from the static and dynamic approaches are combined to create a feature vector for classification. Leave-one-out cross-validation is used to evaluate the final predictor's performance. Out of 49 adolescents with MDD and 33 matched healthy controls, a support vector machine (SVM) classifier with a radial basis function (RBF) kernel, using differential sub-graph entropy combined with dynamic connectivity features, classifies MDD vs. healthy controls with an accuracy of 0.82 under leave-one-out cross-validation. This classifier has specificity and sensitivity of 0.79 and 0.84, respectively.
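The static-connectivity step (absolute Pearson correlation between region time series) and the leave-one-out RBF-SVM evaluation can be sketched as follows. The synthetic cohort, region/timepoint counts, and injected group signal are illustrative only, and this sketch uses the raw correlation features rather than the paper's sub-graph-entropy selection:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def static_connectivity(ts):
    """Absolute Pearson correlations between region time series.
    ts: (n_regions, n_timepoints) -> upper-triangle feature vector."""
    corr = np.abs(np.corrcoef(ts))
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(2)
n_regions, n_time = 85, 200
features, labels = [], []
for i in range(30):                       # synthetic cohort of 30 subjects
    ts = rng.normal(size=(n_regions, n_time))
    if i % 2:                             # "patients": shared signal raises
        ts[:20] += 2.0 * rng.normal(size=(1, n_time))  # within-block correlation
    features.append(static_connectivity(ts))
    labels.append(i % 2)
X, y = np.array(features), np.array(labels)

# Leave-one-out cross-validation of an RBF-kernel SVM, as in the paper's setup.
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut()).mean()
```

With 85 regions the feature vector has 85·84/2 = 3570 connectivity values per subject; the study further reduces these via sub-graph entropy before classification.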
|