1. Pan Y, Zhang Z, Xu T, Chen G. Development and validation of a graph convolutional network (GCN)-based automatic superimposition method for maxillary digital dental models (MDMs). Angle Orthod 2025;95:259-265. PMID: 39961328; PMCID: PMC12017548; DOI: 10.2319/071224-555.1
Abstract
OBJECTIVES To validate the accuracy and reliability of a graph convolutional network (GCN)-based superimposition method for maxillary digital dental models (MDMs) by comparing it with manual superimposition and quantifying the clinical error of the method. MATERIALS AND METHODS A GCN was trained on 100 three-dimensional digital occlusal models, supervised by palatal stable structure labels manually annotated by senior specialists, to segment the palatal stable structure automatically. The average Hausdorff distance was calculated to assess the difference between automatic and manual segmentations. Tooth position and angulation (rotation, tip, and torque) of the bilateral upper first molars and central incisors were measured to quantify the clinical error of automatic superimposition. Reliability was assessed with the intraclass correlation coefficient (ICC). RESULTS The average Hausdorff distance between automatic and manual segmentations of the palatal stable region was 0.36 mm, larger than the intraexaminer and interexaminer deviations. The tooth position deviation was <0.32 mm; the tooth angulation difference was <0.26° for tip and torque and 0.46-0.61° for rotation. ICCs ranged from 0.82 to 0.99 for all variables. CONCLUSIONS GCN-based MDM superimposition is an efficient method for assessing tooth movement in adults. The clinical error in tooth position and angulation induced by the method was clinically acceptable, and reliability was as high as that of manual segmentation.
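As an illustrative aside (not the authors' implementation), the average Hausdorff distance used above can be sketched for two 2D point sets as the mean nearest-neighbor distance in each direction, averaged symmetrically; the point sets in the example are hypothetical:

```python
import math

def avg_hausdorff(a, b):
    """Symmetric average Hausdorff distance between two 2D point sets."""
    def directed(src, dst):
        # Mean distance from each point in src to its nearest point in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (directed(a, b) + directed(b, a)) / 2

# Example: a single 3-4-5 point pair gives a distance of 5.0.
print(avg_hausdorff([(0.0, 0.0)], [(3.0, 4.0)]))  # 5.0
```

In segmentation evaluation, the point sets would be the boundary (or surface) points of the automatic and manual segmentations.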
Affiliation(s)
- Gui Chen
- Corresponding author: Dr Gui Chen, Associate Professor, Department of Orthodontics, Peking University School and Hospital of Stomatology, No. 22 Zhongguancun S Ave, Haidian District, Beijing 100081, People's Republic of China
2. Yang Y, Fu H, Aviles-Rivero AI, Xing Z, Zhu L. DiffMIC-v2: Medical Image Classification via Improved Diffusion Network. IEEE Trans Med Imaging 2025;44:2244-2255. PMID: 40031019; DOI: 10.1109/tmi.2025.3530399
Abstract
Recently, denoising diffusion models have achieved outstanding success in generative image modeling and attracted significant attention in the computer vision community. Although a substantial amount of diffusion-based research has focused on generative tasks, few studies apply diffusion models to medical diagnosis. In this paper, we propose a diffusion-based network (named DiffMIC-v2) to address general medical image classification by eliminating unexpected noise and perturbations in image representations. To achieve this goal, we first devise an improved dual-conditional guidance strategy that conditions each diffusion step with multiple granularities to enhance step-wise regional attention. Furthermore, we design a novel heterologous diffusion process that achieves efficient visual representation learning in the latent space. We evaluate the effectiveness of DiffMIC-v2 on four medical classification tasks with different image modalities: thoracic disease classification on chest X-rays, placental maturity grading on ultrasound images, skin lesion classification on dermatoscopic images, and diabetic retinopathy grading on fundus images. Experimental results demonstrate that DiffMIC-v2 outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model on multi-class and multi-label classification tasks. DiffMIC-v2 requires fewer iterations than our previous DiffMIC to obtain accurate estimations and also achieves greater runtime efficiency with superior results. The code will be publicly available at https://github.com/scott-yjyang/DiffMICv2.
3. Rehman M, Shafi I, Ahmad J, Garcia CO, Barrera AEP, Ashraf I. Advancement in medical report generation: current practices, challenges, and future directions. Med Biol Eng Comput 2025;63:1249-1270. PMID: 39707049; DOI: 10.1007/s11517-024-03265-y
Abstract
The correct analysis of medical images requires the medical knowledge and expertise of radiologists to understand, clarify, and explain complex patterns and diagnose diseases. After analysis, radiologists write detailed and well-structured reports that contribute to the precise and timely diagnosis of patients. However, manually writing reports is expensive and time-consuming, and analyzing medical images, particularly those with multiple views and perspectives, is difficult. Accurate diagnosis is challenging, and many methods, both traditional and deep learning-based, have been proposed to assist radiologists. Automatic report generation is widely used to tackle this issue, as it streamlines the process and lessens the burden of manually labeling images. This paper introduces a systematic literature review (SLR) focused on analyzing and evaluating existing research on medical report generation. The SLR follows a defined protocol for planning, reviewing, and reporting results. The review finds that the most commonly used deep learning models are encoder-decoder frameworks (45 articles), which achieve accuracies of around 92-95%. Transformer-based models (20 articles) are the second most established method, achieving around 91% accuracy. The remaining articles explored in this SLR cover attention mechanisms (10), RNN-LSTM models (10), large language models (10), and graph-based methods (4), all with promising results. However, these methods also face limitations such as overfitting, risk of bias, and high data dependency that impact their performance. The review not only highlights the strengths and challenges of these methods but also suggests ways to address them in the future to increase the accuracy and timeliness of generated medical reports. The goal of this review is to direct radiologists toward methods that lessen their workload and provide precise medical diagnoses.
Affiliation(s)
- Marwareed Rehman
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Jamil Ahmad
- Department of Computing, Abasyn University, Islamabad Campus, Islamabad, 44000, Pakistan
- Carlos Osorio Garcia
- Universidad Europea del Atlantico, Isabel Torres 21, Santander, 39011, Spain
- Universidad Internacional Iberoamericana, Campeche, 24560, Mexico
- Universidade Internacional do Cuanza, Cuito, Bie, Angola
- Alina Eugenia Pascual Barrera
- Universidad Europea del Atlantico, Isabel Torres 21, Santander, 39011, Spain
- Universidad Internacional Iberoamericana, Campeche, 24560, Mexico
- Universidad de La Romana, La Romana, Dominican Republic
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea
4. Patel AN, Srinivasan K. Deep learning paradigms in lung cancer diagnosis: A methodological review, open challenges, and future directions. Phys Med 2025;131:104914. PMID: 39938402; DOI: 10.1016/j.ejmp.2025.104914
Abstract
Lung cancer is the leading cause of global cancer-related deaths, which emphasizes the critical importance of early diagnosis in enhancing patient outcomes. Deep learning has demonstrated significant promise in lung cancer diagnosis, excelling in nodule detection, classification, and prognosis prediction. This methodological review comprehensively explores the application of deep learning models in lung cancer diagnosis, uncovering their integration across various imaging modalities. Deep learning consistently achieves state-of-the-art performance, occasionally surpassing human expert accuracy. Notably, deep neural networks excel in detecting lung nodules, distinguishing between benign and malignant nodules, and predicting patient prognosis. They have also led to the development of computer-aided diagnosis systems, enhancing diagnostic accuracy for radiologists. This review follows the article selection criteria outlined by the PRISMA framework. Despite challenges such as data quality and interpretability limitations, this review emphasizes the potential of deep learning to significantly improve the precision and efficiency of lung cancer diagnosis, encouraging continued research efforts to overcome these obstacles and fully harness neural networks' transformative impact in this field.
Affiliation(s)
- Aryan Nikul Patel
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
5. Yang W, Yoshida S, Zhao J, Wu W, Qiang Y. MVASA-HGN: multi-view adaptive semantic-aware heterogeneous graph network for KRAS mutation status prediction. Quant Imaging Med Surg 2025;15:1190-1211. PMID: 39995744; PMCID: PMC11847186; DOI: 10.21037/qims-24-1370
Abstract
Background In the treatment of advanced non-small cell lung cancer (NSCLC), the mutation status of the Kirsten rat sarcoma viral oncogene homolog (KRAS) gene has been shown to be a key factor affecting the efficacy of immune checkpoint inhibitors (ICIs) and is an important guideline for physicians developing personalized treatment strategies. However, existing mutation prediction studies have primarily focused on the feature representation of individual patient medical data, ignoring the complex semantic relationships among patients across diverse clinical features. This study aimed to accurately identify KRAS gene status, which can not only assist physicians in accurately screening the patient population most likely to benefit from immunotherapy, but also reduce patient burden by avoiding unnecessary treatment attempts. Methods A multi-view adaptive semantics-aware heterogeneous graph framework (MVASA-HGN) based on multimodal medical data was developed to accurately predict KRAS mutation status in NSCLC patients. The framework first parses relational semantics through clinical feature clustering and constructs a heterogeneous graph combining computed tomography (CT) image features and clinical features. In the second step, the heterogeneous graph is split into relational subgraphs under multiple views, and node representations are constructed and updated gradually through a two-stage strategy of single-view graph representation learning and multi-view heterogeneous information fusion. In the single-view phase, we enhance each node's self-embedding and construct the adjacency embedding of neighbors sharing the same relationship type, ensuring that the relational subgraph under each semantic preserves the complete local structure. Two attention mechanisms are introduced in the multi-view fusion phase to capture the enriched semantics preserved in nodes and heterogeneous relations, respectively. Finally, a comprehensive node representation is obtained through adaptive aggregation of neighborhood information from different views and enhanced node embeddings, without predefined meta-paths. Results The classification results were evaluated on cooperating-hospital and The Cancer Imaging Archive (TCIA) datasets, and ablation and comparison experiments were performed on the components of the framework while exploring the framework's rationality and interpretability. Accuracy reached 85.29% and specificity reached 89.67% on the test set, indicating that our framework has significant advantages in deeply modeling complex heterogeneous semantics in local structures and fully exploiting the rich semantic information preserved in heterogeneous relationships. The source code of MVASA-HGN is available at https://github.com/Yangwanter37/MVASA-HGN. Conclusions Our proposed MVASA-HGN framework provides a new perspective on multimodal information fusion and a new avenue for exploring the potential link between images and genes. The framework offers a non-invasive and cost-effective solution for identifying KRAS mutation status, with broad application prospects.
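As a generic sketch of one step described above (not the authors' code), splitting a heterogeneous graph into per-relation subgraphs, one edge list per view, can be illustrated as follows; the node IDs and relation names are invented for the example:

```python
def split_by_relation(edges):
    """Group (src, relation, dst) triples into per-relation edge lists,
    i.e. the relational subgraphs that each view would process."""
    views = {}
    for src, rel, dst in edges:
        views.setdefault(rel, []).append((src, dst))
    return views

# Hypothetical patient-graph triples with two relation types.
triples = [(0, "same_stage", 1), (1, "similar_ct", 2), (0, "same_stage", 2)]
print(split_by_relation(triples))
# {'same_stage': [(0, 1), (0, 2)], 'similar_ct': [(1, 2)]}
```

Each per-relation edge list would then be handled by its own single-view graph learner before multi-view fusion.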
Affiliation(s)
- Wanting Yang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- Shinichi Yoshida
- School of Informatics, Kochi University of Technology, Kochi, Japan
- Juanjuan Zhao
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- College of Software, Taiyuan University of Technology, Taiyuan, China
- Jinzhong College of Information, Taiyuan, China
- Wei Wu
- Department of Clinical Laboratory, Shanxi Provincial People's Hospital, Taiyuan, China
- Yan Qiang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, China
- School of Software, North University of China, Taiyuan, China
6. Zhang M, Hu X, Gu L, Liu L, Kobayashi K, Harada T, Yan Y, Summers RM, Zhu Y. A New Benchmark: Clinical Uncertainty and Severity Aware Labeled Chest X-Ray Images With Multi-Relationship Graph Learning. IEEE Trans Med Imaging 2025;44:338-347. PMID: 39120990; DOI: 10.1109/tmi.2024.3441494
Abstract
Chest radiography, commonly known as CXR, is frequently utilized in clinical settings to detect cardiopulmonary conditions. However, even seasoned radiologists might offer different evaluations regarding the seriousness and uncertainty associated with observed abnormalities. Previous research has attempted to utilize clinical notes to extract abnormality labels for training deep-learning models in CXR image diagnosis. However, these methods often neglected the varying degrees of severity and uncertainty linked to different labels. In our study, we first assembled a comprehensive new dataset of CXR images based on clinical textual data, which incorporates radiologists' assessments of uncertainty and severity. Using this dataset, we introduced a multi-relationship graph learning framework that leverages spatial and semantic relationships while addressing expert uncertainty through a dedicated loss function. Our research showcases a notable enhancement in CXR image diagnosis and in the interpretability of the diagnostic model, surpassing existing state-of-the-art methodologies. The dataset of disease severity and uncertainty labels we extracted is available at: https://physionet.org/content/cad-chest/1.0/.
7. Yuan H, Hong C, Tran NTA, Xu X, Liu N. Leveraging anatomical constraints with uncertainty for pneumothorax segmentation. Health Care Sci 2024;3:456-474. PMID: 39735285; PMCID: PMC11671217; DOI: 10.1002/hcs2.119
Abstract
Background Pneumothorax is a medical emergency caused by the abnormal accumulation of air in the pleural space, the potential space between the lungs and the chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum; we refer to this area as "lung + space." While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach. These models directly map chest radiographs to clinician-annotated lesion areas, often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive. Methods We propose a novel approach that incorporates the lung + space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary task of lung segmentation, generating a specific lung + space constraint for each chest radiograph. Furthermore, we incorporate a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets. Results Our results demonstrated considerable improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in intersection over union, Dice similarity coefficient, and Hausdorff distance, respectively. These results were consistent across six baseline models built on three architectures (U-Net, LinkNet, or PSPNet) and two backbones (VGG-11 or MobileOne-S0). We further conducted an ablation study to evaluate the contribution of each component of the proposed method and undertook several robustness studies on hyper-parameter selection to validate the stability of our method. Conclusions The integration of domain knowledge into DL models for medical applications has often been underemphasized. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation and further bolster clinicians' trust in DL tools. Beyond pneumothorax, our approach is promising for other thoracic conditions that possess location-relevant characteristics.
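For readers unfamiliar with the segmentation metrics cited above, intersection over union (IoU) and the Dice similarity coefficient can be sketched for binary masks represented as sets of positive pixel coordinates (a simplified illustration, not the paper's evaluation code):

```python
def iou_and_dice(pred, truth):
    """IoU and Dice coefficient for two binary masks, each given as a
    set of positive pixel coordinates."""
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return iou, dice

# Two 3-pixel masks overlapping in 2 pixels: IoU 0.5, Dice 2/3.
print(iou_and_dice({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)}))
```

Dice weights the overlap more generously than IoU, which is why both are commonly reported together.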
Affiliation(s)
- Han Yuan
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Chuan Hong
- Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina, USA
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Nan Liu
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Institute of Data Science, National University of Singapore, Singapore
8. Kamalakannan N, Macharla SR, Kanimozhi M, Sudhakar MS. Exponential Pixelating Integral transform with dual fractal features for enhanced chest X-ray abnormality detection. Comput Biol Med 2024;182:109093. PMID: 39232407; DOI: 10.1016/j.compbiomed.2024.109093
Abstract
The heightened prevalence of respiratory disorders, particularly exacerbated by a significant upswing in fatalities due to the novel coronavirus, underscores the critical need for early detection and timely intervention, which possess the potential to profoundly impact and safeguard numerous lives. Medically, chest radiography stands out as an essential and economically viable imaging approach for diagnosing and assessing the severity of diverse respiratory disorders. However, detecting these disorders in chest X-rays is a cumbersome task even for well-trained radiologists, owing to low-contrast issues, overlapping tissue structures, subjective variability, and the presence of noise. To address these issues, a novel analytical model termed the Exponential Pixelating Integral is introduced in this work for the automatic detection of infections in chest X-rays. Initially, the Exponential Pixelating Integral enhances pixel intensities to overcome low-contrast issues; the images are then polar-transformed and represented using the locally invariant Mandelbrot and Julia fractal geometries for effective distinction of structural features. The collated features, labeled Exponential Pixelating Integral with dually characterized fractal features, are then classified by non-parametric multivariate adaptive regression splines, which establish an ensemble model between each pair of classes for effective diagnosis of diverse diseases. Rigorous analysis of the proposed classification framework on large benchmark medical datasets showcases its superiority over its peers, registering classification accuracy of 98.46-99.45% and F1 scores of 96.53-98.10%, making it a precise and interpretable automated system for diagnosing respiratory disorders.
Affiliation(s)
- M Kanimozhi
- School of Electrical & Electronics, Sathyabama Institute of Science and Technology, Chennai, Tamilnadu, India
- M S Sudhakar
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India
9. Yung KW, Sivaraj J, De Coppi P, Stoyanov D, Loukogeorgakis S, Mazomenos EB. Diagnosing Necrotizing Enterocolitis via Fine-Grained Visual Classification. IEEE Trans Biomed Eng 2024;71:3160-3169. PMID: 39453790; DOI: 10.1109/tbme.2024.3409642
Abstract
Necrotizing Enterocolitis (NEC) is a devastating condition affecting prematurely born neonates. Reviewing abdominal X-rays (AXRs) is a key step in NEC diagnosis, staging, and treatment decision-making, but poses significant challenges due to the subtle, difficult-to-identify radiological signs of the disease. In this paper, we propose AIDNEC (AI Diagnosis of NECrotizing enterocolitis), a deep learning method to automatically detect NEC and stratify its severity (surgical or medical) against no pathology in AXRs. The model is trainable end-to-end and integrates Detection Transformer and Graph Convolution modules for localizing discriminative areas in AXRs, used to formulate subtle local embeddings. These are then combined with global image features to perform Fine-Grained Visual Classification (FGVC). We evaluate AIDNEC on our GOSH NEC dataset of 1153 images from 334 patients, achieving 79.7% accuracy in classifying NEC against no pathology. AIDNEC outperforms the backbone by 2.6%, FGVC models by 2.5%, and CheXNet by 4.2%, with statistically significant (two-tailed p < 0.05) improvements, while providing meaningful discriminative regions to support the classification decision. Additional validation on the publicly available Chest X-ray14 dataset yields performance comparable to state-of-the-art methods, illustrating AIDNEC's robustness in a different X-ray classification task.
10. Yoon HJ, Klasky HB, Blanchard AE, Christian JB, Durbin EB, Wu XC, Stroup A, Doherty J, Coyle L, Penberthy L, Tourassi GD. Development of message passing-based graph convolutional networks for classifying cancer pathology reports. BMC Med Inform Decis Mak 2024;24:262. PMID: 39289714; PMCID: PMC11409592; DOI: 10.1186/s12911-024-02662-5
Abstract
BACKGROUND Applying graph convolutional networks (GCNs) to the classification of free-form natural language texts, leveraged by graph-of-words features (TextGCN), has been studied and confirmed to be an effective means of describing complex natural language texts. However, text classification models based on TextGCN possess weaknesses in terms of memory consumption and model dissemination and distribution. In this paper, we present a fast message passing network (FastMPN), implementing a GCN with a message passing architecture that provides versatility and flexibility by allowing trainable node embeddings and edge weights, helping the GCN model find a better solution. We applied the FastMPN model to the task of clinical information extraction from cancer pathology reports, extracting six properties: main site, subsite, laterality, histology, behavior, and grade. RESULTS We evaluated the clinical task performance of the FastMPN models in terms of micro- and macro-averaged F1 scores, with a comparison against the multi-task convolutional neural network (MT-CNN) model. Results show that the FastMPN model is equivalent to or better than the MT-CNN. CONCLUSIONS Our implementation revealed that the FastMPN model, which is based on the PyTorch platform, can train on a large corpus (667,290 training samples with 202,373 unique words) in less than 3 minutes per epoch using one NVIDIA V100 hardware accelerator. Our experiments demonstrated that, using this implementation, the clinical task performance scores for tumor-related information extraction from cancer pathology reports were highly competitive.
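As a toy illustration of the message-passing idea behind such networks (not the authors' PyTorch implementation), one propagation round in which each node averages its own scalar feature with its neighbors' features can be written as:

```python
def message_passing_step(features, edges):
    """One round of message passing: each node's feature becomes the mean
    of its own feature and those of its neighbors (undirected edges)."""
    neighbors = {node: [] for node in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    return {
        node: (feat + sum(features[m] for m in neighbors[node]))
        / (1 + len(neighbors[node]))
        for node, feat in features.items()
    }

# A 3-node path graph with scalar features 0, 2, 4.
print(message_passing_step({0: 0.0, 1: 2.0, 2: 4.0}, [(0, 1), (1, 2)]))
# {0: 1.0, 1: 2.0, 2: 3.0}
```

Real GCN layers replace the scalar mean with learned linear transforms, trainable edge weights, and nonlinearities, but the neighborhood-aggregation pattern is the same.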
Affiliation(s)
- Hong-Jun Yoon
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, Tennessee, 37830, USA
- Hilda B Klasky
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, Tennessee, 37830, USA
- Andrew E Blanchard
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, Tennessee, 37830, USA
- J Blair Christian
- Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, Tennessee, 37830, USA
- Eric B Durbin
- College of Medicine, University of Kentucky, Lexington, Kentucky, 24105, USA
- Xiao-Cheng Wu
- Louisiana Tumor Registry, Louisiana State University Health Sciences Center, School of Public Health, New Orleans, Louisiana, 70112, USA
- Antoinette Stroup
- New Jersey State Cancer Registry, Rutgers Cancer Institute of New Jersey, New Brunswick, New Jersey, 08901, USA
- Jennifer Doherty
- Utah Cancer Registry, Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah, 84132, USA
- Linda Coyle
- Information Management Services, Inc., Calverton, Maryland, 20705, USA
- Lynne Penberthy
- Surveillance Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute, Bethesda, MD, 20814, USA
- Georgia D Tourassi
- National Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee, 37830, USA
11. Luo Y, Mao C, Sanchez-Pinto LN, Ahmad FS, Naidech A, Rasmussen L, Pacheco JA, Schneider D, Mithal LB, Dresden S, Holmes K, Carson M, Shah SJ, Khan S, Clare S, Wunderink RG, Liu H, Walunas T, Cooper L, Yue F, Wehbe F, Fang D, Liebovitz DM, Markl M, Michelson KN, McColley SA, Green M, Starren J, Ackermann RT, D'Aquila RT, Adams J, Lloyd-Jones D, Chisholm RL, Kho A. Northwestern University resource and education development initiatives to advance collaborative artificial intelligence across the learning health system. Learn Health Syst 2024;8:e10417. PMID: 39036530; PMCID: PMC11257059; DOI: 10.1002/lrh2.10417
Abstract
Introduction The rapid development of artificial intelligence (AI) in healthcare has exposed the unmet need for growing a multidisciplinary workforce that can collaborate effectively in learning health systems. Maximizing the synergy among multiple teams is critical for Collaborative AI in Healthcare. Methods We have developed a series of data, tools, and educational resources for cultivating the next generation of multidisciplinary workforce for Collaborative AI in Healthcare. We built bulk natural language processing pipelines to extract structured information from clinical notes and stored the results in common data models. We developed multimodal AI/machine learning (ML) tools and tutorials to enrich the toolbox of the multidisciplinary workforce for analyzing multimodal healthcare data. We have created fertile ground to cross-pollinate clinicians and AI scientists and to train the next generation of the AI health workforce to collaborate effectively. Results Our work has democratized access to unstructured health information, AI/ML tools and resources for healthcare, and collaborative education resources. From 2017 to 2022, this enabled studies in multiple clinical specialties resulting in 68 peer-reviewed publications. In 2022, our cross-discipline efforts converged and were institutionalized into the Center for Collaborative AI in Healthcare. Conclusions Our Collaborative AI in Healthcare initiatives have created valuable educational and practical resources, enabling more clinicians, scientists, and hospital administrators to successfully apply AI methods in their daily research and practice, develop closer collaborations, and advance the institution-level learning health system.
Affiliation(s)
- Yuan Luo
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Chengsheng Mao
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Lazaro N. Sanchez-Pinto
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Division of Critical Care, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
- Faraz S. Ahmad
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Andrew Naidech
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Neurocritical Care, Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Luke Rasmussen
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Jennifer A. Pacheco
  - Center for Genetic Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Daniel Schneider
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
- Leena B. Mithal
  - Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
  - Division of Infectious Diseases, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Scott Dresden
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Emergency Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Kristi Holmes
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Galter Health Sciences Library, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Matthew Carson
  - Galter Health Sciences Library, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Sanjiv J. Shah
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Cardiology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Seema Khan
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Susan Clare
  - Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Richard G. Wunderink
  - Division of Critical Care, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Pulmonary and Critical Care Division, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Huiping Liu
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Pharmacology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Division of Hematology and Oncology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Theresa Walunas
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Division of General Internal Medicine, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Center for Health Information Partnerships, Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Microbiology-Immunology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Lee Cooper
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Feng Yue
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Department of Biochemistry and Molecular Genetics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Firas Wehbe
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Deyu Fang
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- David M. Liebovitz
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Division of General Internal Medicine, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Center for Health Information Partnerships, Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
- Michael Markl
  - Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Kelly N. Michelson
  - Division of Critical Care, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
  - Center for Bioethics and Medical Humanities, Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
- Susanna A. McColley
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois, USA
  - Division of Pulmonary and Sleep Medicine, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Marianne Green
  - Division of General Internal Medicine, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Justin Starren
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Ronald T. Ackermann
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Division of General Internal Medicine, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
- Richard T. D'Aquila
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Division of Infectious Diseases, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- James Adams
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Emergency Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Donald Lloyd-Jones
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Epidemiology, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Rex L. Chisholm
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Center for Health Information Partnerships, Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
- Abel Kho
  - Northwestern University Clinical and Translational Sciences Institute, Chicago, Illinois, USA
  - Institute for Augmented Intelligence in Medicine, Northwestern University, Chicago, Illinois, USA
  - Division of Health and Biomedical Informatics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Division of General Internal Medicine, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
  - Center for Health Information Partnerships, Institute for Public Health and Medicine, Northwestern University, Chicago, Illinois, USA
12
Gao X, Jiang B, Wang X, Huang L, Tu Z. Chest x-ray diagnosis via spatial-channel high-order attention representation learning. Phys Med Biol 2024; 69:045026. [PMID: 38347732] [DOI: 10.1088/1361-6560/ad2014]
Abstract
Objective. Chest x-ray image representation and learning is an important problem in the computer-aided diagnosis area. Existing methods usually adopt CNNs or Transformers for feature representation learning and focus on learning effective representations for chest x-ray images. Although good performance can be obtained, these works are still limited, mainly because they ignore the correlations among channels and pay little attention to the local context-aware feature representation of the chest x-ray image. Approach. To address these problems, in this paper we propose a novel spatial-channel high-order attention model (SCHA) for chest x-ray image representation and diagnosis. The proposed network architecture mainly contains three modules: a context-enhanced backbone network (CEBN), a spatial high-order attention module (SHAM), and a channel high-order attention module (CHAM). Specifically, we first introduce a context-enhanced backbone network that employs multi-head self-attention to extract initial features from the input chest x-ray images. Then, we develop a novel SCHA that contains both spatial and channel high-order attention learning branches. For the spatial branch, we develop a novel locally biased self-attention mechanism that can capture both local and long-range global dependencies among positions to learn rich context-aware representations. For the channel branch, we employ Brownian Distance Covariance to encode the correlation information of channels and regard it as the image representation. Finally, the two learning branches are integrated for the final multi-label diagnosis classification and prediction. Main results. Experiments on the commonly used ChestX-ray14 and CheXpert datasets demonstrate that the proposed SCHA approach obtains better performance than many related approaches. Significance. This study obtains a more discriminative method for chest x-ray classification and provides a technique for computer-aided diagnosis.
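The channel branch's use of Brownian Distance Covariance can be illustrated with a minimal sketch: compute the pairwise Euclidean distance matrix between per-channel feature vectors, then double-center it. This is not the paper's implementation, and the toy feature vectors below are hypothetical; only the core double-centering step is shown.

```python
import math

def euclidean_distance_matrix(features):
    """Pairwise Euclidean distances between per-channel feature vectors."""
    n = len(features)
    return [[math.dist(features[i], features[j]) for j in range(n)]
            for i in range(n)]

def double_center(a):
    """Double-center a square matrix: A_ij - rowmean_i - colmean_j + grandmean."""
    n = len(a)
    row = [sum(r) / n for r in a]
    col = [sum(a[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    return [[a[i][j] - row[i] - col[j] + grand for j in range(n)]
            for i in range(n)]

# Toy example: 3 channels, each described by a 4-dimensional feature vector.
channels = [[1.0, 0.0, 0.0, 1.0],
            [1.0, 0.1, 0.0, 0.9],
            [0.0, 1.0, 1.0, 0.0]]
bdc = double_center(euclidean_distance_matrix(channels))
```

The double-centered matrix is symmetric with zero row and column means; in the paper it serves as a second-order channel-correlation representation of the image.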
Affiliation(s)
- Xinyue Gao
  - The School of Computer Science and Technology, Anhui University, Hefei 230601, People's Republic of China
- Bo Jiang
  - The School of Computer Science and Technology, Anhui University, Hefei 230601, People's Republic of China
- Xixi Wang
  - The School of Computer Science and Technology, Anhui University, Hefei 230601, People's Republic of China
- Lili Huang
  - The School of Computer Science and Technology, Anhui University, Hefei 230601, People's Republic of China
- Zhengzheng Tu
  - The School of Computer Science and Technology, Anhui University, Hefei 230601, People's Republic of China
13
Wu Q, Wang J, Sun Z, Xiao L, Ying W, Shi J. Immunotherapy Efficacy Prediction for Non-Small Cell Lung Cancer Using Multi-View Adaptive Weighted Graph Convolutional Networks. IEEE J Biomed Health Inform 2023; 27:5564-5575. [PMID: 37643107] [DOI: 10.1109/jbhi.2023.3309840]
Abstract
Immunotherapy is an effective way to treat non-small cell lung cancer (NSCLC). The efficacy of immunotherapy differs from person to person and may cause side effects, making it important to predict the efficacy of immunotherapy before surgery. Radiomics based on machine learning has been successfully used to predict the efficacy of NSCLC immunotherapy. However, most studies only considered the radiomic features of the individual patient, ignoring inter-patient correlations. Besides, they usually concatenated different features as the input of a single-view model, failing to consider the complex correlations among features of multiple types. To this end, we propose a multi-view adaptive weighted graph convolutional network (MVAW-GCN) for the prediction of NSCLC immunotherapy efficacy. Specifically, we group the radiomic features into several views according to the type of filtered images from which they were extracted. We construct a graph in each view based on the radiomic features and phenotypic information. An attention mechanism is introduced to automatically assign weights to each view. Considering the view-shared and view-specific knowledge of radiomic features, we propose a separable graph convolution that decomposes the output of the last convolution layer into two components, i.e., the view-shared and view-specific outputs. We maximize the consistency and enhance the diversity among different views during learning. The proposed MVAW-GCN is evaluated on 107 NSCLC patients, including 52 patients with valid efficacy and 55 patients with invalid efficacy. Our method achieved an accuracy of 77.27% and an area under the curve (AUC) of 0.7780, indicating its effectiveness in NSCLC immunotherapy efficacy prediction.
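The attention-based view weighting that the abstract describes can be sketched minimally as a softmax over per-view scores followed by a weighted fusion of per-view predictions. In the paper the scores are learned end to end; here they are fixed, hypothetical values, so this is an illustration of the mechanism rather than the authors' model.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_views(view_outputs, view_scores):
    """Weighted fusion of per-view outputs; each view gets a softmax weight."""
    weights = softmax(view_scores)
    dim = len(view_outputs[0])
    return [sum(w * out[k] for w, out in zip(weights, view_outputs))
            for k in range(dim)]

# Toy example: three views, each producing a 2-class probability vector.
views = [[0.6, 0.4], [0.8, 0.2], [0.5, 0.5]]
fused = fuse_views(views, view_scores=[1.0, 2.0, 0.5])
```

Because the weights sum to one and each view's output is a probability vector, the fused output is again a probability vector, dominated by the highest-scoring view.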
14
Nie W, Zhang C, Song D, Zhao L, Bai Y, Xie K, Liu A. Deep reinforcement learning framework for thoracic diseases classification via prior knowledge guidance. Comput Med Imaging Graph 2023; 108:102277. [PMID: 37567045] [DOI: 10.1016/j.compmedimag.2023.102277]
Abstract
The chest X-ray is commonly employed in the diagnosis of thoracic diseases. Over the years, numerous approaches have been proposed to address the issue of automatic diagnosis based on chest X-rays. However, the limited availability of labeled data for related diseases remains a significant challenge to achieving accurate diagnoses. This paper focuses on the diagnostic problem of thoracic diseases and presents a novel deep reinforcement learning framework. This framework incorporates prior knowledge to guide the learning process of diagnostic agents, and the model parameters can be continually updated as more data become available, mimicking a person's learning process. Specifically, our approach offers two key contributions: (1) prior knowledge can be acquired from models pre-trained on old data or on similar data from other domains, effectively reducing the dependence on target-domain data; and (2) the reinforcement learning framework enables the diagnostic agent to be as exploratory as a human, leading to improved diagnostic accuracy through continuous exploration. Moreover, this method effectively addresses the challenge of learning models with limited data, enhancing the model's generalization capability. We evaluate the performance of our approach using the well-known NIH ChestX-ray14 and CheXpert datasets and achieve competitive results. More importantly, we make considerable progress toward clinical application. The source code for our approach can be accessed at the following URL: https://github.com/NeaseZ/MARL.
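The idea of initializing a diagnostic agent with prior knowledge and updating it as new cases arrive can be sketched with a one-step tabular Q-learning loop. This is a simplified illustration, not the MARL code in the linked repository; the case names, actions, and prior value are hypothetical, and exploration is omitted so the run is deterministic.

```python
def q_update(q, key, reward, lr=0.5):
    """Standard tabular update: move Q(key) toward the observed reward."""
    q[key] = q.get(key, 0.0) + lr * (reward - q.get(key, 0.0))

def run(prior_q, cases, truth, actions, lr=0.5):
    """One-step diagnostic 'agent': Q-values start from prior knowledge
    (e.g., distilled from a model pre-trained on old or related-domain data)
    and keep updating as new labelled cases arrive."""
    q = dict(prior_q)  # start from prior knowledge, not from scratch
    for case in cases:
        # Greedy action under current knowledge (exploration omitted here).
        action = max(actions, key=lambda a: q.get((case, a), 0.0))
        reward = 1.0 if action == truth[case] else -1.0
        q_update(q, (case, action), reward, lr)
    return q

prior = {("cough", "pneumonia"): 0.3}  # hypothetical prior preference
q = run(prior, cases=["cough", "cough"], truth={"cough": "pneumonia"},
        actions=["pneumonia", "no_finding"])
```

The prior biases the agent toward the correct action from the first case, and each confirming reward strengthens that preference, which is the "reduced dependence on target-domain data" the abstract argues for.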
Affiliation(s)
- Weizhi Nie
  - School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Chen Zhang
  - School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Dan Song
  - School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lina Zhao
  - Department of Critical Care Medicine, Tianjin Medical University General Hospital, Tianjin 300052, China
- Yunpeng Bai
  - Department of Cardiac Surgery, Chest Hospital, Tianjin University, Tianjin 300222, China
  - Clinical School of Thoracic, Tianjin Medical University, Tianjin 300052, China
- Keliang Xie
  - Department of Critical Care Medicine, Tianjin Medical University General Hospital, Tianjin 300052, China
  - Department of Anesthesiology, Tianjin Medical University General Hospital, Tianjin 300052, China
  - Tianjin Institute of Anesthesiology, Tianjin 300052, China
- Anan Liu
  - School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
15
Liu Z, Cheng Y, Tamura S. Multi-Label Local to Global Learning: A Novel Learning Paradigm for Chest X-Ray Abnormality Classification. IEEE J Biomed Health Inform 2023; 27:4409-4420. [PMID: 37252867] [DOI: 10.1109/jbhi.2023.3281466]
Abstract
Deep neural network (DNN) approaches have shown remarkable progress in automatic chest X-ray classification. However, existing methods use a training scheme that simultaneously trains all abnormalities without considering their learning priority. Inspired by the clinical practice of radiologists progressively recognizing more abnormalities, and by the observation that existing curriculum learning (CL) methods based on image difficulty may not be suitable for disease diagnosis, we propose a novel CL paradigm named multi-label local to global (ML-LGL). This approach iteratively trains DNN models on a gradually increasing set of abnormalities within the dataset, i.e., from fewer abnormalities (local) to more abnormalities (global). At each iteration, we first build the local category by adding high-priority abnormalities for training, where an abnormality's priority is determined by our three proposed clinical knowledge-leveraged selection functions. Then, images containing abnormalities in the local category are gathered to form a new training set. Finally, the model is trained on this set using a dynamic loss. Additionally, we demonstrate the superiority of ML-LGL from the perspective of the model's initial stability during training. Experimental results on three open-source datasets, PLCO, ChestX-ray14 and CheXpert, show that our proposed learning paradigm outperforms baselines and achieves results comparable to state-of-the-art methods. The improved performance promises potential applications in multi-label chest X-ray classification.
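The local-to-global schedule can be sketched as follows, with fixed priority scores standing in for the paper's three clinical knowledge-leveraged selection functions; the abnormality names, scores, and dataset are hypothetical.

```python
def local_to_global_schedule(label_priority, dataset, step=1):
    """Yield (active_labels, training_images) per curriculum round:
    the label set grows from high-priority abnormalities (local) toward
    all abnormalities (global), and each round keeps only the images
    that contain at least one currently active label."""
    ordered = sorted(label_priority, key=label_priority.get, reverse=True)
    for end in range(step, len(ordered) + 1, step):
        active = set(ordered[:end])
        subset = [img for img, labels in dataset.items() if labels & active]
        yield active, subset

# Toy example: priorities stand in for the clinical selection functions.
priority = {"effusion": 3.0, "mass": 2.0, "nodule": 1.0}
data = {"x1": {"effusion"}, "x2": {"mass", "nodule"}, "x3": {"nodule"}}
rounds = list(local_to_global_schedule(priority, data))
```

Each round's subset would be used to train (or fine-tune) the model before the label set grows again, mirroring the "local to global" iterations in the abstract.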
16
Ni H, Zhou G, Chen X, Ren J, Yang M, Zhang Y, Zhang Q, Zhang L, Mao C, Li X. Predicting Recurrence in Pancreatic Ductal Adenocarcinoma after Radical Surgery Using an AX-Unet Pancreas Segmentation Model and Dynamic Nomogram. Bioengineering (Basel) 2023; 10:828. [PMID: 37508855] [PMCID: PMC10376503] [DOI: 10.3390/bioengineering10070828]
Abstract
This study aims to investigate the reliability of radiomic features extracted from contrast-enhanced computed tomography (CT) by AX-Unet, a pancreas segmentation model, to analyse the recurrence of pancreatic ductal adenocarcinoma (PDAC) after radical surgery. We trained an AX-Unet model to extract radiomic features from preoperative contrast-enhanced CT images on a training set of 205 PDAC patients. We then evaluated the segmentation ability of AX-Unet and the relationship between radiomic features and clinical characteristics on an independent testing set of 64 patients with clear prognoses. Lasso regression analysis was used to screen for variables affecting patients' post-operative recurrence, and Cox proportional hazards regression analysis was used to screen for risk factors and create a nomogram prediction model. The proposed model achieved an accuracy of 85.9% for pancreas segmentation, meeting the requirements of most clinical applications. Radiomic features were found to be significantly correlated with clinical characteristics such as lymph node metastasis, resectability status, and abnormally elevated serum carbohydrate antigen 19-9 (CA 19-9) levels. Specifically, variance and entropy were associated with the recurrence rate (p < 0.05). The AUC of the nomogram predicting post-operative recurrence was 0.92 (95% CI: 0.78-0.99) and the C-index was 0.62 (95% CI: 0.48-0.78). The AX-Unet pancreas segmentation model shows promise for analysing recurrence risk factors after radical surgery for PDAC. Additionally, our findings suggest that a dynamic nomogram model based on AX-Unet can provide pancreatic oncologists with more accurate prognostic assessments for their patients.
Affiliation(s)
- Haixu Ni
  - First Clinical Medical College, Lanzhou University, Lanzhou 730000, China
  - Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, China
- Gonghai Zhou
  - School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Xinlong Chen
  - First Clinical Medical College, Lanzhou University, Lanzhou 730000, China
- Jing Ren
  - The Reproductive Medicine Hospital of the First Hospital of Lanzhou University, Lanzhou 730000, China
- Minqiang Yang
  - School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Yuhong Zhang
  - School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Qiyu Zhang
  - First Clinical Medical College, Lanzhou University, Lanzhou 730000, China
  - Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, China
- Lei Zhang
  - First Clinical Medical College, Lanzhou University, Lanzhou 730000, China
  - Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, China
- Chengsheng Mao
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Xun Li
  - First Clinical Medical College, Lanzhou University, Lanzhou 730000, China
  - Department of General Surgery, First Hospital of Lanzhou University, Lanzhou 730000, China
  - Key Laboratory of Biotherapy and Regenerative Medicine of Gansu Province, Lanzhou 730000, China
17
Li MM, Huang K, Zitnik M. Graph representation learning in biomedicine and healthcare. Nat Biomed Eng 2022; 6:1353-1369. [PMID: 36316368] [PMCID: PMC10699434] [DOI: 10.1038/s41551-022-00942-x]
Abstract
Networks, or graphs, are universal descriptors of systems of interacting elements. In biomedicine and healthcare, they can represent, for example, molecular interactions, signalling pathways, disease co-morbidities or healthcare systems. In this Perspective, we posit that representation learning can realize principles of network medicine, discuss successes and current limitations of the use of representation learning on graphs in biomedicine and healthcare, and outline algorithmic strategies that leverage the topology of graphs to embed them into compact vectorial spaces. We argue that graph representation learning will keep pushing forward machine learning for biomedicine and healthcare applications, including the identification of genetic variants underlying complex traits, the disentanglement of single-cell behaviours and their effects on health, the assistance of patients in diagnosis and treatment, and the development of safe and effective medicines.
Affiliation(s)
- Michelle M Li
  - Bioinformatics and Integrative Genomics Program, Harvard Medical School, Boston, MA, USA
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Kexin Huang
  - Health Data Science Program, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Marinka Zitnik
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
  - Broad Institute of MIT and Harvard, Cambridge, MA, USA
  - Harvard Data Science Initiative, Cambridge, MA, USA
18
Li Y, Wu X, Yang P, Jiang G, Luo Y. Machine Learning for Lung Cancer Diagnosis, Treatment, and Prognosis. Genomics Proteomics Bioinformatics 2022; 20:850-866. [PMID: 36462630] [PMCID: PMC10025752] [DOI: 10.1016/j.gpb.2022.11.003]
Abstract
The recent development of imaging and sequencing technologies enables systematic advances in the clinical study of lung cancer. Meanwhile, the human mind is limited in effectively handling and fully utilizing the accumulation of such enormous amounts of data. Machine learning-based approaches play a critical role in integrating and analyzing these large and complex datasets, which have extensively characterized lung cancer through the use of different perspectives from these accrued data. In this review, we provide an overview of machine learning-based approaches that strengthen the varying aspects of lung cancer diagnosis and therapy, including early detection, auxiliary diagnosis, prognosis prediction, and immunotherapy practice. Moreover, we highlight the challenges and opportunities for future applications of machine learning in lung cancer.
Affiliation(s)
- Yawei Li
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Xin Wu
  - Department of Medicine, University of Illinois at Chicago, Chicago, IL 60612, USA
- Ping Yang
  - Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN 55905 / Scottsdale, AZ 85259, USA
- Guoqian Jiang
  - Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN 55905, USA
- Yuan Luo
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
19
Zhang B, Guo X, Lin Q, Wang H, Xu S. Counterfactual inference graph network for disease prediction. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109722]
20
Yang M, Zhang Y, Chen H, Wang W, Ni H, Chen X, Li Z, Mao C. AX-Unet: A Deep Learning Framework for Image Segmentation to Assist Pancreatic Tumor Diagnosis. Front Oncol 2022; 12:894970. [PMID: 35719964] [PMCID: PMC9202000] [DOI: 10.3389/fonc.2022.894970]
Abstract
Image segmentation plays an essential role in medical imaging analysis, such as tumor boundary extraction. Recently, deep learning techniques have dramatically improved performance for image segmentation. However, an important factor preventing deep neural networks from going further is the information loss during the information propagation process. In this article, we present AX-Unet, a deep learning framework incorporating a modified atrous spatial pyramid pooling module to learn location information and to extract multi-level contextual information, reducing information loss during downsampling. We also introduce a special group convolution operation on the feature map at each level to achieve information decoupling between channels. In addition, we propose an explicit boundary-aware loss function to tackle the blurry boundary problem. We evaluate our model on two public pancreas CT datasets: the NIH Pancreas-CT dataset and the pancreas portion of the Medical Segmentation Decathlon (MSD) dataset. The experimental results validate that our model can outperform state-of-the-art methods in pancreas CT image segmentation. By comparing the extracted feature outputs of our model, we find that the pancreatic regions of normal people and of patients with pancreatic tumors show significant differences. This could provide a promising and reliable way to assist physicians in the screening of pancreatic tumors.
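The modified atrous spatial pyramid pooling (ASPP) module builds on dilated (atrous) convolution applied at several rates in parallel. A minimal 1-D sketch of the underlying operation follows; it is not the AX-Unet code, which operates on 2-D feature maps with learned kernels, and the kernel and rates below are hypothetical.

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D valid convolution with a dilated kernel: taps are spaced
    `dilation` samples apart, enlarging the receptive field for free."""
    span = (len(kernel) - 1) * dilation
    out_len = len(signal) - span
    return [sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
            for i in range(out_len)]

def aspp_1d(signal, kernel, rates=(1, 2, 4)):
    """Apply the same kernel at several dilation rates and collect the
    outputs, mimicking how ASPP gathers multi-scale context in parallel."""
    return {r: dilated_conv1d(signal, kernel, r) for r in rates}

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
branches = aspp_1d(x, kernel=[0.5, 0.5])
```

Each branch averages samples at a different spacing, so the module sees the same input at several effective scales without pooling away resolution, which is the information-loss argument the abstract makes.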
Affiliation(s)
- Minqiang Yang
  - School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Yuhong Zhang
  - School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Haoning Chen
  - School of Statistics and Data Science, Nankai University, Tianjin, China
- Wei Wang
  - School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China
- Haixu Ni
  - Department of General Surgery, First Hospital of Lanzhou University, Lanzhou, China
- Xinlong Chen
  - First Clinical Medical College, Lanzhou University, Lanzhou, China
- Zhuoheng Li
  - School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Chengsheng Mao
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
21
Mao C, Yao L, Luo Y. MedGCN: Medication recommendation and lab test imputation via graph convolutional networks. J Biomed Inform 2022; 127:104000. [PMID: 35104644] [PMCID: PMC8901567] [DOI: 10.1016/j.jbi.2022.104000]
Abstract
Laboratory testing and medication prescription are two of the most important routines in daily clinical practice. Developing an artificial intelligence system that can automatically make lab test imputations and medication recommendations can save costs on potentially redundant lab tests and inform physicians of more effective prescriptions. We present an intelligent medical system (named MedGCN) that can automatically recommend patients' medications based on their incomplete lab tests, and can even accurately estimate lab values that have not been taken. In our system, we integrate the complex relations between multiple types of medical entities with their inherent features in a heterogeneous graph. We then model the graph to learn a distributed representation for each entity in the graph based on graph convolutional networks (GCN). Through the propagation of graph convolutional networks, the entity representations can incorporate multiple types of medical information that can benefit multiple medical tasks. Moreover, we introduce a cross-regularization strategy to reduce overfitting in multi-task training via the interaction between the multiple tasks. In this study, we construct a graph to associate four types of medical entities, i.e., patients, encounters, lab tests, and medications, and apply a graph neural network to learn node embeddings for medication recommendation and lab test imputation. We validate our MedGCN model on two real-world datasets: NMEDW and MIMIC-III. The experimental results on both datasets demonstrate that our model can outperform the state-of-the-art in both tasks. We believe that our innovative system can provide a promising and reliable way to assist physicians in making medication prescriptions and to save costs on potentially redundant lab tests.
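The graph convolution that MedGCN builds on can be sketched as a single standard GCN layer, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). This toy example is not the authors' heterogeneous multi-task architecture; the graph, features, and weight matrix are made up to show how connected nodes come to share representation.

```python
def matmul(a, b):
    """Plain dense matrix multiply on nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def normalize_adjacency(adj):
    """Symmetrically normalize A + I, as in the standard GCN layer."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    return [[a_hat[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
            for i in range(n)]

def gcn_layer(adj, features, weight):
    """One propagation step: neighbors' features are averaged, projected, ReLU'd."""
    out = matmul(matmul(normalize_adjacency(adj), features), weight)
    return [[max(0.0, v) for v in row] for row in out]

# Toy graph: nodes 0 and 1 are connected, node 2 is isolated.
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [[1.0, 0.0], [0.0, 1.0]]
emb = gcn_layer(adj, h, w)
```

After one layer the two connected nodes' embeddings are blended toward each other while the isolated node keeps its own features; stacking such layers is what lets MedGCN propagate information across patients, encounters, lab tests, and medications.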
Affiliation(s)
- Chengsheng Mao
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Liang Yao
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Yuan Luo
  - Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA