1
He R, Sarwal V, Qiu X, Zhuang Y, Zhang L, Liu Y, Chiang J. Generative AI Models in Time-Varying Biomedical Data: Scoping Review. J Med Internet Res 2025; 27:e59792. [PMID: 40063929 PMCID: PMC11933772 DOI: 10.2196/59792]
Abstract
BACKGROUND Trajectory modeling is a long-standing challenge in the application of computational methods to health care. In the age of big data, traditional statistical and machine learning methods do not achieve satisfactory results as they often fail to capture the complex underlying distributions of multimodal health data and long-term dependencies throughout medical histories. Recent advances in generative artificial intelligence (AI) have provided powerful tools to represent complex distributions and patterns with minimal underlying assumptions, with major impact in fields such as finance and environmental sciences, prompting researchers to apply these methods for disease modeling in health care. OBJECTIVE While AI methods have proven powerful, their application in clinical practice remains limited due to their highly complex nature. The proliferation of AI algorithms also poses a significant challenge for nondevelopers to track and incorporate these advances into clinical research and application. In this paper, we introduce basic concepts in generative AI and discuss current algorithms and how they can be applied to health care for practitioners with little background in computer science. METHODS We surveyed peer-reviewed papers on generative AI models with specific applications to time-series health data. Our search included single- and multimodal generative AI models that operated over structured and unstructured data, physiological waveforms, medical imaging, and multi-omics data. We introduce current generative AI methods, review their applications, and discuss their limitations and future directions in each data modality. RESULTS We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines and reviewed 155 articles on generative AI applications to time-series health care data across modalities. 
Furthermore, we offer a systematic framework for clinicians to easily identify suitable AI methods for their data and task at hand. CONCLUSIONS We reviewed and critiqued existing applications of generative AI to time-series health data with the aim of bridging the gap between computational methods and clinical application. We also identified the shortcomings of existing approaches and highlighted recent advances in generative AI that represent promising directions for health care modeling.
Affiliation(s)
- Rosemary He
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Varuni Sarwal
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Xinru Qiu
- Division of Biomedical Sciences, School of Medicine, University of California Riverside, Riverside, CA, United States
- Yongwen Zhuang
- Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States
- Le Zhang
- Institute for Integrative Genome Biology, University of California Riverside, Riverside, CA, United States
- Yue Liu
- Institute for Cellular and Molecular Biology, University of Texas at Austin, Austin, TX, United States
- Jeffrey Chiang
- Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Neurosurgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
2
Gou X, Feng A, Feng C, Cheng J, Hong N. Imaging genomics of cancer: a bibliometric analysis and review. Cancer Imaging 2025; 25:24. [PMID: 40038813 DOI: 10.1186/s40644-025-00841-9]
Abstract
BACKGROUND Imaging genomics is a burgeoning field that seeks to identify connections between medical imaging and genomic features. It has been widely applied to explore heterogeneity and to predict responsiveness and disease progression in cancer. This review aims to assess current applications and advancements of imaging genomics in cancer. METHODS Literature on imaging genomics in cancer was retrieved and selected from PubMed, Web of Science, and Embase before July 2024. Detailed information from the articles, such as the organ systems and imaging features studied, was extracted and analyzed. Citation information was extracted from Web of Science and Scopus. Additionally, a bibliometric analysis of the included studies was conducted using the Bibliometrix R package and VOSviewer. RESULTS A total of 370 articles were included in the study. The annual growth rate of articles on imaging genomics in cancer is 24.88%. China (133) and the USA (107) were the most productive countries. The top two Keywords Plus terms were "survival" and "classification". Current research mainly focuses on the central nervous system (121) and the genitourinary system (110, including 44 breast cancer articles). Although different systems utilize different imaging modalities, more than half of the studies in each system employed radiomics features. CONCLUSIONS Publication databases provide data support for imaging genomics research. The development of artificial intelligence algorithms, especially in feature extraction and model construction, has significantly advanced this field and improved the interpretability of the resulting models. Nonetheless, challenges such as small sample sizes and the standardization of feature extraction and model construction must still be overcome. The research trends revealed in this study will guide the future development of imaging genomics and contribute to more accurate cancer diagnosis and treatment in the clinic.
Affiliation(s)
- Xinyi Gou
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Aobo Feng
- College of Computer and Information, Inner Mongolia Medical University, Inner Mongolia, China
- Caizhen Feng
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Jin Cheng
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Nan Hong
- Department of Radiology, Peking University People's Hospital, Beijing, China
3
Okada N, Inoue S, Liu C, Mitarai S, Nakagawa S, Matsuzawa Y, Fujimi S, Yamamoto G, Kuroda T. Unified total body CT image with multiple organ specific windowings: validating improved diagnostic accuracy and speed in trauma cases. Sci Rep 2025; 15:5654. [PMID: 39955327 PMCID: PMC11830084 DOI: 10.1038/s41598-024-83346-y]
Abstract
Total-body CT scans are useful in saving trauma patients; however, interpreting numerous images with varied window settings slows injury detection. We developed an algorithm for a "unified total-body CT image with multiple organ-specific windowings (Uni-CT)" and assessed its impact on physician accuracy and speed in trauma CT interpretation. From November 7, 2008, to June 19, 2020, 40 cases of total-body CT images of blunt trauma with multiple injuries were collected from the emergency department of Osaka General Medical Center and randomly divided into two groups. In half of the cases, the Uni-CT algorithm used semantic segmentation to assign visibility-friendly window settings to each organ. Four physicians with varying levels of experience interpreted 20 cases using the algorithm and 20 cases in conventional settings. Performance was analyzed in terms of the accuracy, sensitivity, and specificity of the target findings, and diagnostic speed. In the proposed and conventional groups, patients had an average of 2.6 and 2.5 target findings, mean ages of 51.8 and 57.7 years, and male proportions of 60% and 45%, respectively. The agreement rate for physicians' diagnoses was κ = 0.70. Average accuracy, sensitivity, and specificity for target findings were 84.8%, 74.3%, and 96.9% versus 85.5%, 81.2%, and 91.5%, respectively, with no significant differences. Diagnostic speed per case averaged 71.9 and 110.4 s in each group (p < 0.05). The Uni-CT algorithm improved the diagnostic speed of total-body CT for trauma while maintaining accuracy comparable to that of conventional methods.
Affiliation(s)
- Naoki Okada
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Osaka General Medical Center, Osaka-shi, Osaka, Japan
- Chang Liu
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Sho Mitarai
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Goshiro Yamamoto
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Tomohiro Kuroda
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
4
Botnari A, Kadar M, Patrascu JM. Considerations on Image Preprocessing Techniques Required by Deep Learning Models. The Case of the Knee MRIs. Maedica 2024; 19:526-535. [PMID: 39553362 PMCID: PMC11565144 DOI: 10.26574/maedica.2024.19.3.526]
Abstract
OBJECTIVES This study aims to demonstrate the preprocessing steps applied to knee MRI images before meniscal lesions are detected with deep learning models, and to highlight their practical implications in diagnosing knee conditions, especially meniscal injuries, which are often caused by degeneration or trauma. Magnetic resonance imaging (MRI) is key in this field, especially when combined with ligament evaluations. MATERIALS AND METHODS We initially worked with DICOM-format images, the standard for medical imaging, utilizing the Python packages PyDicom and SimpleITK for preprocessing. We also addressed the NIfTI format commonly used in research. Our preprocessing methods encompassed modality-specific adjustments, orientation, spatial resampling, intensity normalization, standardization, and conversion to the algorithm's input format. These steps ensure efficient data handling and accelerate training. RESULTS Our study processed PD-sagittal images from 188 patients to create a dataset for training a deep learning segmentation model. We completed all preprocessing steps, including accessing DICOM header information using hexadecimal-encoded identifiers and utilizing SimpleITK for efficient handling of both 2D and 3D DICOM data. Resampling was performed for all 188 sets. Additionally, manual segmentation was conducted on 188 MRI scans, focusing on regions of interest (ROIs) such as normal tissue and meniscus tears in both the medial and lateral menisci. This involved contrast adjustment and precise hand-tracing of the structures within the ROIs. CONCLUSIONS Our study introduces preprocessing methods intended to streamline the preparation of standardized formats for deep learning model training and to benefit radiologists and orthopedic surgeons by reducing the time and effort required for tasks such as meniscal tear segmentation and localization.
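Two of the preprocessing steps named in this abstract, spatial resampling to a target pixel spacing and intensity normalization, can be illustrated with a minimal NumPy sketch. This is not the authors' pipeline (they used PyDicom and SimpleITK, whose interpolators would replace the nearest-neighbour indexing below); the function names and toy data are illustrative only:

```python
import numpy as np

def normalize_intensity(img: np.ndarray) -> np.ndarray:
    """Z-score intensity normalization, a common MRI preprocessing step."""
    return (img - img.mean()) / (img.std() + 1e-8)

def resample_nearest(img: np.ndarray, old_spacing, new_spacing) -> np.ndarray:
    """Nearest-neighbour spatial resampling to a target pixel spacing."""
    # New grid size implied by the spacing change
    new_shape = tuple(
        int(round(s * o / n)) for s, o, n in zip(img.shape, old_spacing, new_spacing)
    )
    # Map each new-grid index back to the closest old-grid index
    idx = [
        np.minimum((np.arange(ns) * n / o).round().astype(int), s - 1)
        for ns, n, o, s in zip(new_shape, new_spacing, old_spacing, img.shape)
    ]
    return img[np.ix_(*idx)]

# Toy 2D "slice": 4x4 pixels at 0.5 mm spacing, resampled to 1.0 mm -> 2x2
slice_ = np.arange(16, dtype=float).reshape(4, 4)
resampled = resample_nearest(slice_, (0.5, 0.5), (1.0, 1.0))
normalized = normalize_intensity(resampled)
print(resampled.shape)  # (2, 2)
```

In a real pipeline the spacing values would come from the DICOM or NIfTI header rather than being hard-coded.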
Affiliation(s)
- A Botnari
- "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania
- M Kadar
- "1 Decembrie 1918" University of Alba Iulia, Alba Iulia, Romania
- J M Patrascu
- "Victor Babes" University of Medicine and Pharmacy, Timisoara, Romania
5
Kim HS, Kim S, Kim H, Song SY, Cha Y, Kim JT, Kim JW, Ha YC, Yoo JI. A retrospective evaluation of individual thigh muscle volume disparities based on hip fracture types in followed-up patients: an AI-based segmentation approach using UNETR. PeerJ 2024; 12:e17509. [PMID: 39161969 PMCID: PMC11332390 DOI: 10.7717/peerj.17509]
Abstract
Background Hip fractures are a common and debilitating condition, particularly among older adults. Loss of muscle mass and strength is a common consequence of hip fractures, which further contributes to functional decline and increased disability. Assessing changes in individual thigh muscle volumes in follow-up patients can provide valuable insights into the quantitative recovery process and guide rehabilitation interventions. However, accurately measuring individual thigh muscle volumes is challenging because manual anatomical segmentation is labor-intensive and time-consuming. Materials and Methods This study aimed to evaluate differences in thigh muscle volume on follow-up computed tomography (CT) scans of hip fracture patients using an AI-based automatic muscle segmentation model. The study included a total of 18 patients at Gyeongsang National University who had undergone surgical treatment for a hip fracture. We utilized an automatic segmentation algorithm that we had previously developed using the UNETR (U-Net Transformer) architecture (Dice score = 0.84; relative absolute volume difference = 0.019 ± 0.017%). Results The results revealed that intertrochanteric fractures cause more significant muscle volume loss (females: -97.4 cm3, males: -178.2 cm3) than femoral neck fractures (females: -83 cm3, males: -147.2 cm3). Additionally, the study uncovered substantial disparities in susceptibility to volume loss among specific thigh muscles, including the vastus lateralis, adductor longus and brevis, and gluteus maximus, particularly in cases of intertrochanteric fractures. Conclusions The use of an automatic muscle segmentation model based on deep learning algorithms enables efficient and accurate analysis of thigh muscle volume differences in followed-up hip fracture patients. Our findings emphasize the significant muscle loss tied to sarcopenia, a critical condition among the elderly. Intertrochanteric fractures resulted in greater muscle volume deformities, especially in key muscle groups, across both genders. Notably, while most muscles exhibited volume reduction following hip fracture, the sartorius, vastus, and gluteus groups demonstrated more significant disparities in individuals who sustained intertrochanteric fractures. This non-invasive approach provides valuable insight into the extent of muscle atrophy following hip fracture and can inform targeted rehabilitation interventions.
Affiliation(s)
- Hyeon Su Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Shinjune Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Hyunbin Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Sang-Youn Song
- Department of Orthopaedic Surgery, Gyeongsang National University Hospital, Jinju, South Korea
- Yonghan Cha
- Department of Orthopaedic Surgery, Daejeon Eulji Medical Center, Daejeon, South Korea
- Jung-Taek Kim
- Department of Orthopaedic Surgery, Ajou University School of Medicine, Ajou Medical Center, Suwon, South Korea
- Jin-Woo Kim
- Department of Orthopaedic Surgery, Nowon Eulji Medical Center, Seoul, South Korea
- Yong-Chan Ha
- Department of Orthopaedic Surgery, Seoul Bumin Hospital, Seoul, South Korea
- Jun-Il Yoo
- Department of Orthopaedic Surgery, Inha University Hospital, Inha University College of Medicine, Incheon, South Korea
6
Ong EP, Srivastava R, Chen W. Fuzzy-Label Weighted Deep Learning Classification for CT Image Quality Evaluation. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40039488 DOI: 10.1109/embc53108.2024.10782438]
Abstract
This paper proposes a fuzzy-label weighted deep learning-based image classification approach for assessing computed tomography (CT) image quality. More specifically, we want to determine whether a captured CT image passes quality assessment (QA) at a certain radiation dose. Our contributions include proposing a fuzzy-label weighting method and introducing the concept of a "fuzzy label" (reflecting the annotator's confidence in the ground-truth annotation) to aid the training of deep learning-based image classification. We also propose an ensemble/assimilation method to determine image quality at the level of the entire CT image using CT windowing (i.e., clipping of the CT image to 8-bit grayscale with respect to various window-width (WW) and window-level (WL) settings), similar to what a human would do manually when assessing CT image quality in the factory setting. Experimental results showed that our proposed fuzzy-label weighted deep learning-based image classification approach (trained using annotations provided by a single annotator) significantly outperforms its traditional baseline image classification counterpart.
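The CT windowing operation this abstract relies on, clipping Hounsfield-unit values to an 8-bit grayscale range defined by a window width (WW) and window level (WL), is a standard, well-defined transform. A minimal NumPy sketch (illustrative only, not the paper's code; the function name and toy HU values are our own):

```python
import numpy as np

def apply_ct_window(hu: np.ndarray, ww: float, wl: float) -> np.ndarray:
    """Clip a CT image (Hounsfield units) to an 8-bit grayscale window.

    ww: window width, wl: window level (center). Values below wl - ww/2 map
    to 0, values above wl + ww/2 map to 255, with linear scaling in between.
    """
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Toy example: a soft-tissue window (WW=400, WL=40) applied to a few HU values
hu_values = np.array([-1000.0, -160.0, 40.0, 240.0, 1000.0])  # air .. dense bone
windowed = apply_ct_window(hu_values, ww=400.0, wl=40.0)
print(windowed)  # [  0   0 127 255 255]
```

Evaluating the same image under several WW/WL pairs (e.g., lung, soft-tissue, and bone windows) yields the multiple 8-bit views that an ensemble like the one described here would assess.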
7
Kim HS, Kim H, Kim S, Cha Y, Kim JT, Kim JW, Ha YC, Yoo JI. Precise individual muscle segmentation in whole thigh CT scans for sarcopenia assessment using U-net transformer. Sci Rep 2024; 14:3301. [PMID: 38331977 PMCID: PMC10853213 DOI: 10.1038/s41598-024-53707-8]
Abstract
This study aims to develop a deep learning-based automatic segmentation approach using the UNETR (U-Net Transformer) architecture to quantify the volumes of individual thigh muscles (27 muscles in 5 groups) for sarcopenia assessment. By automating the segmentation process, this approach improves the efficiency and accuracy of muscle volume calculation, facilitating a comprehensive understanding of muscle composition and its relationship to sarcopenia. The study utilized a dataset of 72 whole thigh CT scans from hip fracture patients, annotated by two radiologists. The UNETR model was trained to perform precise voxel-level segmentation, and metrics such as the Dice score, average symmetric surface distance, volume correlation, relative absolute volume difference, and Hausdorff distance were employed to evaluate the model's performance. Additionally, the correlation between sarcopenia and individual thigh muscle volumes was examined. The proposed model demonstrated superior segmentation performance compared with the baseline model, achieving higher Dice scores (DC = 0.84) and lower average symmetric surface distances (ASSD = 1.4191 ± 0.91). Volume correlation analysis showed negative associations between sarcopenia and individual thigh muscle volumes in the male group, and the correlation analysis of grouped thigh muscles likewise showed negative associations with sarcopenia in male participants. This study presents a deep learning-based automatic segmentation approach for quantifying individual thigh muscle volume for sarcopenia assessment. The results highlight the associations between sarcopenia and specific individual muscles, as well as grouped thigh muscle regions, particularly in males. The proposed method improves the efficiency and accuracy of muscle volume calculation, contributing to a comprehensive evaluation of sarcopenia and providing valuable insights for effective interventions in sarcopenia management.
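Two of the segmentation metrics named in this abstract, the Dice score and the relative absolute volume difference, have simple closed forms. A minimal sketch on toy binary masks (not the study's evaluation code; the function names and masks are illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def relative_abs_volume_diff(pred: np.ndarray, truth: np.ndarray) -> float:
    """Relative absolute volume difference: | |A| - |B| | / |B|."""
    return abs(int(pred.sum()) - int(truth.sum())) / truth.sum()

# Toy 1D "masks": both mark 6 voxels, 5 of which overlap
truth = np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=bool)
pred  = np.array([0, 1, 1, 1, 1, 1, 1, 0], dtype=bool)
print(round(dice_score(pred, truth), 3))               # 0.833
print(round(relative_abs_volume_diff(pred, truth), 3))  # 0.0
```

Note that the two metrics capture different failure modes: here the volumes agree exactly (RAVD = 0) even though the masks are not identical (Dice < 1).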
Affiliation(s)
- Hyeon Su Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Hyunbin Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Shinjune Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Yonghan Cha
- Department of Orthopaedic Surgery, Daejeon Eulji Medical Center, Daejeon, South Korea
- Jung-Taek Kim
- Department of Orthopedic Surgery, Ajou University School of Medicine, Suwon, South Korea
- Jin-Woo Kim
- Department of Orthopaedic Surgery, Nowon Eulji Medical Center, Seoul, South Korea
- Yong-Chan Ha
- Department of Orthopaedic Surgery, Seoul Bumin Hospital, Seoul, South Korea
- Jun-Il Yoo
- Department of Orthopedic Surgery, School of Medicine, Inha University Hospital, Incheon, South Korea
8
Ding H, Chen X, Wang H, Zhang L, Wang F, He L. Identifying immunodeficiency status in children with pulmonary tuberculosis: using radiomics approach based on un-enhanced chest computed tomography. Transl Pediatr 2023; 12:2191-2202. [PMID: 38197102 PMCID: PMC10772833 DOI: 10.21037/tp-23-309]
Abstract
Background Children with primary immunodeficiency diseases (PIDs) are particularly vulnerable to infection with Mycobacterium tuberculosis (Mtb). Chest computed tomography (CT) is an important examination for diagnosing pulmonary tuberculosis (PTB), and there are some differences between primary immunocompromised and immunocompetent cases of PTB. Therefore, this study aimed to use radiomics analysis based on un-enhanced CT to identify immunodeficiency status in children with PTB. Methods This retrospective study enrolled a total of 173 patients with a diagnosis of PTB and available immunodeficiency status. Based on their immunodeficiency status, the patients were divided into PIDs (n=72) and no-PIDs (n=101) groups. The samples were randomly divided into training and testing groups at a ratio of 3:1. Regions of interest were obtained by segmenting lung lesions on un-enhanced CT images to extract radiomics features. The optimal radiomics features were identified after dimensionality reduction in the training group, and a logistic regression algorithm was used to establish the radiomics model. The model was validated in the training and testing groups. Diagnostic efficiency of the model was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, precision, accuracy, F1 score, calibration curve, and decision curve. Results The radiomics model was constructed using nine optimal features. In the training set, the model achieved an AUC of 0.837, sensitivity of 0.783, specificity of 0.780, and F1 score of 0.749. Cross-validation of the model in the training set showed an AUC of 0.774, sensitivity of 0.834, specificity of 0.720, and F1 score of 0.749. In the test set, the model achieved an AUC of 0.746, sensitivity of 0.722, specificity of 0.692, and F1 score of 0.823. Calibration curves indicated strong predictive performance, and decision curve analysis demonstrated the model's clinical utility.
Conclusions The CT-based radiomics model demonstrates good discriminative efficacy in identifying the presence of PIDs in children with PTB, and shows promise in accurately identifying the immunodeficiency status in this population.
Affiliation(s)
- Hao Ding
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Xin Chen
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Haoru Wang
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Li Zhang
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
- Fang Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ling He
- Department of Radiology, Children’s Hospital of Chongqing Medical University, Chongqing, China
- National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, Chongqing, China
9
Azuri I, Wattad A, Peri-Hanania K, Kashti T, Rosen R, Caspi Y, Istaiti M, Wattad M, Applbaum Y, Zimran A, Revel-Vilk S, Eldar YC. A Deep-Learning Approach to Spleen Volume Estimation in Patients with Gaucher Disease. J Clin Med 2023; 12:5361. [PMID: 37629403 PMCID: PMC10455264 DOI: 10.3390/jcm12165361]
Abstract
The enlargement of the liver and spleen (hepatosplenomegaly) is a common manifestation of Gaucher disease (GD). An accurate estimation of the liver and spleen volumes in patients with GD, using imaging tools such as magnetic resonance imaging (MRI), is crucial for baseline assessment and for monitoring the response to treatment. A commonly used method in clinical practice to estimate the spleen volume is a formula based on measurements of the craniocaudal length, diameter, and thickness of the spleen in MRI. However, the inaccuracy of this formula is significant, which emphasizes the need for a more precise and reliable alternative. To this end, we employed deep-learning techniques to achieve more accurate spleen segmentation and, subsequently, spleen volume calculation, in a test cohort of 20 patients with GD. Our results indicate that the mean error obtained using the deep-learning approach to spleen volume estimation is 3.6 ± 2.7%, significantly lower than that of the common formula approach, which resulted in a mean error of 13.9 ± 9.6%. These findings suggest that integrating deep-learning methods into routine clinical practice for spleen volume calculation could lead to improved diagnostic and monitoring outcomes.
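The segmentation-based volume estimate compared against the length/diameter/thickness formula here is, at its core, a voxel count scaled by voxel size. A hedged sketch of that calculation and of the relative-error metric the abstract reports (the function names, spacing, and toy mask are illustrative, not the paper's data):

```python
import numpy as np

def volume_from_mask(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Organ volume in mL from a binary segmentation mask.

    Each voxel contributes spacing_x * spacing_y * spacing_z mm^3;
    1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def relative_error_pct(estimate: float, reference: float) -> float:
    """Percentage error of an estimate against a reference volume."""
    return abs(estimate - reference) / reference * 100.0

# Toy mask: a 20x20x20-voxel cube at 1.5 mm isotropic spacing
mask = np.zeros((40, 40, 40), dtype=bool)
mask[10:30, 10:30, 10:30] = True
vol_ml = volume_from_mask(mask, spacing_mm=(1.5, 1.5, 1.5))
print(vol_ml)  # 27.0 (8000 voxels * 3.375 mm^3 / 1000)
```

If a length-based formula had estimated, say, 30 mL for this toy organ, `relative_error_pct(30.0, vol_ml)` would report roughly an 11% error, the kind of per-patient discrepancy the study quantifies.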
Affiliation(s)
- Ido Azuri
- Bioinformatics Unit, Department of Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ameer Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Keren Peri-Hanania
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Tamar Kashti
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Ronnie Rosen
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Yaron Caspi
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Majdolen Istaiti
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Makram Wattad
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Yaakov Applbaum
- Department of Radiology, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Ari Zimran
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Shoshana Revel-Vilk
- Gaucher Unit, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Faculty of Medicine, Hebrew University, Jerusalem 9112102, Israel
- Yonina C. Eldar
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
10
Raju ASN, Venkatesh K. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset. Bioengineering (Basel) 2023; 10:738. [PMID: 37370669 PMCID: PMC10295325 DOI: 10.3390/bioengineering10060738]
Abstract
Colorectal cancer is associated with a high mortality rate and significant patient risk. Images obtained during a colonoscopy are used to make a diagnosis, highlighting the importance of timely diagnosis and treatment. Deep learning techniques could enhance the diagnostic accuracy of existing systems. Using advanced deep learning techniques, a new EnsemDeepCADx system for accurate colorectal cancer diagnosis has been developed. The optimal accuracy is achieved by combining convolutional neural networks (CNNs) with transfer learning via bidirectional long short-term memory (BiLSTM) and support vector machines (SVMs). Three ensemble CNNs (ADaDR-22, ADaR-22, and DaRD-22) are built from four pre-trained CNN models: AlexNet, DarkNet-19, DenseNet-201, and ResNet-50. The CADx system is thoroughly evaluated at each of its stages. From the CKHK-22 mixed dataset, colour, greyscale, and local binary pattern (LBP) image datasets and features are utilised. In the second stage, the returned features are compared to a new feature-fusion dataset using the three CNN ensembles. Next, the ensemble CNNs are combined with SVM-based transfer learning, comparing raw features to the feature-fusion datasets. In the final stage of transfer learning, BiLSTM and SVM are combined with a CNN ensemble. The testing accuracy for the ensemble fusion CNN DaRD-22 using BiLSTM and SVM on the original, greyscale, LBP, and feature-fusion datasets was optimal (95.96%, 88.79%, 73.54%, and 97.89%, respectively). Comparing the outputs of all four feature datasets with those of the three ensemble CNNs at each stage enables the EnsemDeepCADx system to attain its highest level of accuracy.
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Chennai 603203, India

11
Patsanis A, Sunoqrot MRS, Bathen TF, Elschot M. CROPro: a tool for automated cropping of prostate magnetic resonance images. J Med Imaging (Bellingham) 2023; 10:024004. [PMID: 36895761 PMCID: PMC9990132 DOI: 10.1117/1.jmi.10.2.024004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 02/09/2023] [Indexed: 03/09/2023] Open
Abstract
Purpose: To bypass manual data preprocessing and optimize deep learning performance, we developed and evaluated CROPro, a tool to standardize automated cropping of prostate magnetic resonance (MR) images. Approach: CROPro enables automatic cropping of MR images regardless of patient health status, image size, prostate volume, or pixel spacing. CROPro can crop foreground pixels from a region of interest (e.g., prostate) with different image sizes, pixel spacings, and sampling strategies. Performance was evaluated in the context of clinically significant prostate cancer (csPCa) classification. Transfer learning was used to train five convolutional neural network (CNN) and five vision transformer (ViT) models using different combinations of cropped image sizes (64×64, 128×128, and 256×256 pixels), pixel spacings (0.2×0.2, 0.3×0.3, 0.4×0.4, and 0.5×0.5 mm²), and sampling strategies (center, random, and stride cropping) over the prostate. T2-weighted MR images (N=1475) from the openly available PI-CAI challenge were used to train (N=1033), validate (N=221), and test (N=221) all models. Results: Among CNNs, SqueezeNet with stride cropping (image size 128×128, pixel spacing 0.2×0.2 mm²) achieved the best classification performance (0.678 ± 0.006). Among ViTs, ViT-H/14 with random cropping (image size 64×64, pixel spacing 0.5×0.5 mm²) achieved the best performance (0.756 ± 0.009). Model performance depended on the cropped area, with the optimal size generally larger for center cropping (∼40 cm²) than for random/stride cropping (∼10 cm²). Conclusion: We found that the csPCa classification performance of CNNs and ViTs depends on the cropping settings. We demonstrated that CROPro is well suited to optimizing these settings in a standardized manner, which could improve the overall performance of deep learning models.
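The three sampling strategies this entry compares (center, random, and stride cropping) can be sketched in a few lines of numpy. This is a generic illustration under assumed semantics, not CROPro's API; function names and the stride grid are illustrative.

```python
import numpy as np

def center_crop(img, size):
    """Crop a size x size patch centred on the image midpoint."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def random_crop(img, size, rng):
    """Crop a size x size patch at a uniformly random position."""
    h, w = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def stride_crops(img, size, stride):
    """All size x size patches on a regular stride grid."""
    h, w = img.shape
    return [img[t:t + size, l:l + size]
            for t in range(0, h - size + 1, stride)
            for l in range(0, w - size + 1, stride)]

img = np.arange(16 * 16).reshape(16, 16)
print(center_crop(img, 8).shape)     # (8, 8)
print(len(stride_crops(img, 8, 4)))  # 9 patches on a 3 x 3 grid
```

Center cropping yields one large patch per image, while random and stride cropping yield many smaller, partially overlapping patches, which is consistent with the different optimal crop areas the study reports.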
Affiliation(s)
- Alexandros Patsanis
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- Mohammed R. S. Sunoqrot
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway
- Tone F. Bathen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway
- Mattijs Elschot
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway

12
Ni M, Zhao Y, Wen X, Lang N, Wang Q, Chen W, Zeng X, Yuan H. Deep learning-assisted classification of calcaneofibular ligament injuries in the ankle joint. Quant Imaging Med Surg 2023; 13:80-93. [PMID: 36620152 PMCID: PMC9816759 DOI: 10.21037/qims-22-470] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 09/07/2022] [Indexed: 11/07/2022]
Abstract
Background: The classification of calcaneofibular ligament (CFL) injuries on magnetic resonance imaging (MRI) is time-consuming and subject to substantial interreader variability. This study explores the feasibility of classifying CFL injuries using deep learning methods, comparing them with the classifications of musculoskeletal (MSK) radiologists, and further examines image cropping, screening, and calibration methods. Methods: The imaging data of 1,074 patients who underwent ankle arthroscopy and MRI examinations in our hospital were retrospectively analyzed. According to the arthroscopic findings, patients were divided into normal (class 0, n=475); degeneration, strain, and partial tear (class 1, n=217); and complete tear (class 2, n=382) groups. All patients were divided into training, validation, and test sets at a ratio of 8:1:1. After preprocessing, the images were cropped using a Mask region-based convolutional neural network (R-CNN), followed by an attention algorithm for image screening and calibration and LeNet-5 for CFL injury classification. The diagnostic performance of the axial, coronal, and combined models was compared, and the best method was selected for outgroup validation. The diagnostic results of the models on the intragroup and outgroup test sets were compared with those of 4 MSK radiologists of different seniority. Results: The mean average precision (mAP) of the Mask R-CNN with the attention algorithm for left and right image cropping of axial and coronal sequences was 0.90-0.96. The accuracy of LeNet-5 for classifying classes 0-2 was 0.92, 0.93, and 0.92, respectively, for the axial sequences and 0.89, 0.92, and 0.90, respectively, for the coronal sequences. After sequence combination, the classification accuracy for classes 0-2 was 0.95, 0.97, and 0.96, respectively. The mean accuracies of the 4 MSK radiologists in classifying the intragroup test set as classes 0-2 were 0.94, 0.91, 0.86, and 0.85, all significantly different from those of the model. The mean accuracies of the MSK radiologists in classifying the outgroup test set were 0.92, 0.91, 0.87, and 0.85; the 2 senior MSK radiologists showed diagnostic performance similar to the model, and the junior MSK radiologists showed worse accuracy. Conclusions: Deep learning can classify CFL injuries at levels similar to those of MSK radiologists. Adding an attention algorithm after cropping helps crop CFL images accurately.
Affiliation(s)
- Ming Ni
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Yuqing Zhao
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Xiaoyi Wen
- Institute of Statistics and Big Data, Renmin University of China, Beijing, China
- Ning Lang
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Qizheng Wang
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Wen Chen
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Xiangzhu Zeng
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China

13
Rouzrokh P, Khosravi B, Faghani S, Moassefi M, Vera Garcia DV, Singh Y, Zhang K, Conte GM, Erickson BJ. Mitigating Bias in Radiology Machine Learning: 1. Data Handling. Radiol Artif Intell 2022; 4:e210290. [PMID: 36204544 PMCID: PMC9533091 DOI: 10.1148/ryai.210290] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 05/08/2023]
Abstract
Minimizing bias is critical to adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices during data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. Authors employ an arbitrary and simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022.
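One of the data-splitting pitfalls this report covers is leakage of correlated images from one patient into both train and test sets, which inflates apparent performance. A minimal numpy sketch of a patient-level split (the function name and API are illustrative, not from the report's notebook):

```python
import numpy as np

def split_by_patient(patient_ids, test_frac, seed=0):
    """Split sample indices so that no patient appears in both sets.

    Splitting at the image level can place different images of the
    same patient in train and test; splitting at the patient level
    avoids that source of bias.
    """
    ids = np.asarray(patient_ids)
    patients = np.unique(ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(patients)
    n_test = max(1, int(round(test_frac * len(patients))))
    test_patients = set(patients[:n_test])
    test_idx = np.flatnonzero([p in test_patients for p in ids])
    train_idx = np.flatnonzero([p not in test_patients for p in ids])
    return train_idx, test_idx

# 3 images each for 10 patients
ids = np.repeat(np.arange(10), 3)
tr, te = split_by_patient(ids, test_frac=0.2)
assert set(ids[tr]).isdisjoint(ids[te])  # no patient leakage
```

Libraries such as scikit-learn provide equivalent group-aware splitters; the point here is only the grouping logic.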
Affiliation(s)
- Pouria Rouzrokh, Bardia Khosravi, Shahriar Faghani, Mana Moassefi, Diana V. Vera Garcia, Yashbir Singh, Kuan Zhang, Gian Marco Conte, Bradley J. Erickson
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905

14
Intelligent tuberculosis activity assessment system based on an ensemble of neural networks. Comput Biol Med 2022; 147:105800. [PMID: 35809407 DOI: 10.1016/j.compbiomed.2022.105800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 05/11/2022] [Accepted: 06/26/2022] [Indexed: 11/20/2022]
Abstract
This article proposes a novel approach to assessing the degree of pulmonary tuberculosis activity from active tuberculoma foci. It develops a new method for processing lung CT images using an ensemble of deep convolutional neural networks together with several special-purpose algorithms: an optimized algorithm for preliminary segmentation and selection of informative scans, a new algorithm for refining segmented masks to improve final accuracy, and an efficient fuzzy inference system for a more weighted activity assessment. The approach also uses a medical classification of disease activity based on densitometric measures of tuberculomas. The selection and markup of the training sample images were performed manually by qualified pulmonologists from a base of approximately 9,000 CT lung scans of patients enrolled in the dispensary over 15 years. The first basic step of the proposed approach is the developed algorithm for preprocessing CT lung scans. It consists of segmenting intrapulmonary regions that contain vessels, bronchi, and lung walls to detect complex cases of ingrown tuberculomas. To minimize computational cost, the proposed approach includes a new method for selecting informative lung scans, i.e., those that potentially contain tuberculomas. The main processing step is binary segmentation of tuberculomas, performed optimally by an ensemble of neural networks. The ensemble size and composition are optimized with an algorithm that calculates individual contributions; a modification of this algorithm using new, effective heuristic metrics improves its performance on this problem. A special algorithm was developed for post-processing the tuberculoma masks obtained during the segmentation step. The goal of this step is to refine the calculated mask to the physical placement of the tuberculoma. The algorithm cleans the mask of noisy formations on the scan and expands the mask area to maximize capture of the tuberculoma location. A simplified fuzzy inference system was developed for a more accurate final calculation of the degree of disease activity, reflecting data from current medical studies. The accuracy of the system was also tested on a test sample of independent patients, showing more than 96% correct calculations of disease activity and confirming the effectiveness and feasibility of introducing the system into clinical practice.
15
Keek SA, Beuque M, Primakov S, Woodruff HC, Chatterjee A, van Timmeren JE, Vallières M, Hendriks LEL, Kraft J, Andratschke N, Braunstein SE, Morin O, Lambin P. Predicting Adverse Radiation Effects in Brain Tumors After Stereotactic Radiotherapy With Deep Learning and Handcrafted Radiomics. Front Oncol 2022; 12:920393. [PMID: 35912214 PMCID: PMC9326101 DOI: 10.3389/fonc.2022.920393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 06/13/2022] [Indexed: 11/13/2022] Open
Abstract
Introduction: There is a cumulative risk of 20-40% of developing brain metastases (BM) in solid cancers. Stereotactic radiotherapy (SRT) enables the application of high focal doses of radiation to a volume and is often used for BM treatment. However, SRT can cause adverse radiation effects (ARE), such as radiation necrosis, which sometimes cause irreversible damage to the brain. It is therefore of clinical interest to identify patients at a high risk of developing ARE. We hypothesized that models trained with radiomics features, deep learning (DL) features, and patient characteristics or their combination can predict ARE risk in patients with BM before SRT. Methods: Gadolinium-enhanced T1-weighted MRIs and characteristics from patients treated with SRT for BM were collected for a training and testing cohort (N = 1,404) and a validation cohort (N = 237) from a separate institute. From each lesion in the training set, radiomics features were extracted and used to train an extreme gradient boosting (XGBoost) model. A DL model was trained on the same cohort to make a separate prediction and to extract the last layer of features. Different XGBoost models were built using only radiomics features, DL features, and patient characteristics, or a combination of them. Evaluation was performed using the area under the receiver operating characteristic curve (AUC) on the external dataset. Predictions for individual lesions and per patient developing ARE were investigated. Results: The best-performing XGBoost model on a lesion level was trained on a combination of radiomics features and DL features (AUC of 0.71 and recall of 0.80). On a patient level, a combination of radiomics features, DL features, and patient characteristics obtained the best performance (AUC of 0.72 and recall of 0.84). The DL model achieved an AUC of 0.64 and recall of 0.85 per lesion, and an AUC of 0.70 and recall of 0.60 per patient. Conclusion: Machine learning models built on radiomics features and DL features extracted from BM, combined with patient characteristics, show potential to predict ARE at the patient and lesion levels. These models could be used in clinical decision making, informing patients of their risk of ARE and allowing physicians to opt for different therapies.
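The AUC values this entry reports have a closed-form rank interpretation (the Mann-Whitney U statistic): the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small numpy sketch for binary labels, as a generic illustration rather than the study's evaluation code:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    # average ranks over tied scores
    for s in np.unique(scores):
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    u = ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This matches scikit-learn's `roc_auc_score` on the same inputs and makes explicit why AUC is insensitive to monotone rescaling of the model's scores.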
Affiliation(s)
- Simon A. Keek
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Manon Beuque
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Henry C. Woodruff
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Department of Radiology and Nuclear Medicine, GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, Netherlands
- Avishek Chatterjee
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Janita E. van Timmeren
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Martin Vallières
- Medical Physics Unit, Department of Oncology, Faculty of Medicine, McGill University, Montréal, QC, Canada
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC, Canada
- Lizza E. L. Hendriks
- Department of Pulmonary Diseases, GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, Netherlands
- Johannes Kraft
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Department of Radiation Oncology, University Hospital Würzburg, Würzburg, Germany
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Steve E. Braunstein
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
- Olivier Morin
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, United States
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW School for Oncology and Reproduction, Maastricht University, Maastricht, Netherlands
- Department of Radiology and Nuclear Medicine, GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, Netherlands
- Correspondence: Philippe Lambin

16
Venugopal V, Joseph J, Vipin Das M, Kumar Nath M. An EfficientNet-based modified sigmoid transform for enhancing dermatological macro-images of melanoma and nevi skin lesions. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 222:106935. [PMID: 35724474 DOI: 10.1016/j.cmpb.2022.106935] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 04/28/2022] [Accepted: 06/03/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE: During the initial stages, skin lesions may not show sufficient intensity difference or contrast from the background region on dermatological macro-images. The lack of proper light exposure when the image is captured also reduces contrast. Low contrast between lesion and background regions adversely impacts segmentation. Enhancement techniques for improving the contrast between lesion and background skin on dermatological macro-images are limited in the literature. An EfficientNet-based modified sigmoid transform for enhancing the contrast of dermatological macro-images is proposed to address this issue. METHODS: A modified sigmoid transform is applied in the HSV color space. The crossover point in the modified sigmoid transform, which divides the macro-image into lesion and background, is predicted using a modified EfficientNet regressor to exclude manual intervention and subjectivity. The modified EfficientNet regressor is constructed by replacing the classifier layer in the conventional EfficientNet with a regression layer. Transfer learning is employed to reduce the training time and the size of the dataset required to train the regressor. For training, a set of value components extracted from the HSV color-space representation of the macro-images in the training dataset is fed as input. The corresponding set of ideal crossover points, at which the Dice similarity coefficient (DSC) between the ground-truth images and the segmented outputs obtained from Otsu's thresholding is maximal, is defined as the target. RESULTS: On images enhanced with the proposed framework, the DSC of segmentation results obtained by Otsu's thresholding increased from 0.68 ± 0.34 to 0.81 ± 0.17. CONCLUSIONS: The proposed algorithm consistently improved the contrast between lesion and background on a comprehensive set of test images, justifying its application in the automated analysis of dermatological macro-images.
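The sigmoid contrast stretch around a crossover point that this entry describes can be sketched in a few lines. This is a generic textbook form applied to the HSV value channel; the `gain` parameter and function name are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sigmoid_contrast(v, crossover, gain=10.0):
    """Sigmoid contrast stretch of an intensity channel in [0, 1].

    Pixels above the crossover point (the predicted lesion/background
    boundary) are pushed towards 1, pixels below towards 0, widening
    the lesion-background contrast. `gain` controls the steepness.
    """
    v = np.asarray(v, dtype=float)
    return 1.0 / (1.0 + np.exp(-gain * (v - crossover)))

v = np.array([0.2, 0.5, 0.8])
out = sigmoid_contrast(v, crossover=0.5)
print(out)  # values spread away from the 0.5 crossover
```

The paper's contribution is predicting the crossover per image with an EfficientNet regressor instead of choosing it by hand; the transform itself is this simple monotone remapping.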
Affiliation(s)
- Vipin Venugopal
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India
- Justin Joseph
- School of Bioengineering, VIT Bhopal University, Sehore, Madhya Pradesh 466114, India
- M Vipin Das
- Department of Dermatology, Kerala Health Services, Trivandrum, Kerala 695035, India
- Malaya Kumar Nath
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India

17
Subramaniam P, Kossen T, Ritter K, Hennemuth A, Hildebrand K, Hilbert A, Sobesky J, Livne M, Galinovic I, Khalil AA, Fiebach JB, Frey D, Madai VI. Generating 3D TOF-MRA volumes and segmentation labels using generative adversarial networks. Med Image Anal 2022; 78:102396. [DOI: 10.1016/j.media.2022.102396] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 01/28/2022] [Accepted: 02/17/2022] [Indexed: 02/01/2023]
18
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 03/09/2022] [Accepted: 04/11/2022] [Indexed: 11/19/2022]
19
Turkbey B, Haider MA. Deep learning-based artificial intelligence applications in prostate MRI: brief summary. Br J Radiol 2022; 95:20210563. [PMID: 34860562 PMCID: PMC8978238 DOI: 10.1259/bjr.20210563] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Prostate cancer (PCa) is the most common cancer type in males in the Western world. MRI has an established role in the diagnosis of PCa by guiding biopsies. Due to the complex, multistep nature of the MRI-guided PCa diagnosis pathway, diagnostic performance varies widely. Developing artificial intelligence (AI) models using machine learning, particularly deep learning, has an expanding role in radiology. Specifically, for prostate MRI, several AI approaches have been described in the literature for prostate segmentation, lesion detection, and classification, with the aim of improving diagnostic performance and interobserver agreement. In this review article, we summarize radiology applications of AI in prostate MRI.
Affiliation(s)
- Baris Turkbey
- Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA

20
Toğaçar M. Detection of retinopathy disease using morphological gradient and segmentation approaches in fundus images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106579. [PMID: 34896689 DOI: 10.1016/j.cmpb.2021.106579] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Revised: 12/01/2021] [Accepted: 12/03/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE: Diabetes-related complications can cause glaucoma, cataracts, optic neuritis, paralysis of the eye muscles, or various retinal damage over time. Diabetic retinopathy is the most common form of blindness that occurs with diabetes. It is a disease that occurs when the blood vessels in the retina of the eye become damaged, leading to loss of vision in advanced stages. The disease can occur in any diabetic patient, and the most important factor in treating it is early diagnosis. Nowadays, deep learning models and machine learning methods, which are open to technological development, are already used in early-diagnosis systems. In this study, two publicly available datasets were used. The datasets comprise five types according to the severity of diabetic retinopathy. The proposed approach to diabetic retinopathy detection aims to contribute positively to the performance of CNN models by processing fundus images through preprocessing steps (morphological gradient and segmentation approaches), and to detect efficient sets from the type-based activation sets obtained from CNN models using the Atom Search Optimization method, increasing classification success. METHODS: The proposed approach consists of three steps. In the first step, the morphological gradient method is used to suppress parasitic noise in each image, and the ocular vessels in fundus images are extracted using the segmentation method. In the second step, the datasets are trained with transfer-learning models, and the activations for each class type in the last fully connected layers of these models are extracted. In the last step, the Atom Search Optimization method is used, and the most dominant activation class is selected from the extracted activations on a class basis. RESULTS: When classified by the severity of diabetic retinopathy, an overall accuracy of 99.59% was achieved for dataset #1 and 99.81% for dataset #2. CONCLUSIONS: The overall accuracy achieved with the proposed approach increased; this was accomplished by applying the preprocessing steps and selecting the dominant activation sets from the deep learning models with the Atom Search Optimization method.
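The morphological gradient used here as a preprocessing step is the difference between a local dilation and a local erosion: flat regions map to zero and edges (e.g., vessel boundaries in a fundus image) map to the local intensity range. A minimal pure-numpy sketch with a 3×3 structuring element (the paper's exact kernel and parameters are not given, so these are assumptions):

```python
import numpy as np

def morphological_gradient(img):
    """3x3 morphological gradient: local max minus local min."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='edge')
    # stack all nine 3x3-neighbourhood shifts of the image
    stack = np.stack([p[1 + dy:p.shape[0] - 1 + dy,
                        1 + dx:p.shape[1] - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    return stack.max(axis=0) - stack.min(axis=0)

img = np.zeros((5, 5))
img[:, 2:] = 1.0  # vertical step edge
g = morphological_gradient(img)
print(g[:, 1:3])  # the gradient fires on both sides of the edge
```

Equivalent results come from `scipy.ndimage.morphological_gradient` with `size=3`; the point of the sketch is only the dilation-minus-erosion structure.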
Affiliation(s)
- Mesut Toğaçar
- Computer Technologies Department, Technical Sciences Vocational School, Fırat University, Elazığ, Turkey

21
A General Preprocessing Pipeline for Deep Learning on Radiology Images: A COVID-19 Case Study. PROGRESS IN ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1007/978-3-031-16474-3_20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
22
Sharma S, Kaushal A, Patel S, Kumar V, Prakash M, Mandeep D. Methods to address metal artifacts in post-processed CT images - A do-it-yourself guide for orthopedic surgeons. J Clin Orthop Trauma 2021; 20:101493. [PMID: 34277344 PMCID: PMC8267498 DOI: 10.1016/j.jcot.2021.101493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 06/29/2021] [Accepted: 06/30/2021] [Indexed: 11/28/2022] Open
Abstract
Computed tomography (CT) scans are often used for postoperative imaging in orthopedics. In the presence of metallic hardware, artifacts are generated, which can hamper visualization of the CT images, and also render the study ineffective for 3-D printing. Various solutions are available to minimize metal artifacts, and radiologists can employ these before or after processing the CT study. However, the orthopedic surgeon may be faced with situations where the metal artifacts were not addressed. To counter such problems, we present three do-it-yourself (DIY) techniques that can be used to manage metal artifacts.
Affiliation(s)
- Sandeep Patel
- Corresponding author. Department of Orthopedics, PGIMER, Chandigarh, Pin-160012, India

23
Multi-view iterative random walker for automated salvageable tissue delineation in ischemic stroke from multi-sequence MRI. J Neurosci Methods 2021; 360:109260. [PMID: 34146591 DOI: 10.1016/j.jneumeth.2021.109260] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 05/19/2021] [Accepted: 06/13/2021] [Indexed: 12/13/2022]
Abstract
BACKGROUND AND OBJECTIVE: Non-invasive and robust identification of salvageable tissue (penumbra) is crucial for interventional stroke therapy. Besides identifying stroke injury as a whole, the ability to automatically differentiate core and penumbra tissues using both diffusion and perfusion magnetic resonance imaging (MRI) sequences is essential for ischemic stroke treatment. METHOD: A fully automated and novel one-shot multi-view iterative random walker (MIRW) method with automated injury seed-point detection is developed for lesion delineation. MIRW utilizes the hierarchical decomposition of the multi-sequence MRI physical properties of the underlying tissue within the lesion, maximizing the inter-class variations of the volumetric histogram to estimate probable seed points. These estimates are then used to conglomerate the lesion estimates iteratively from axial, coronal, and sagittal MRI volumes for computationally efficient segmentation and quantification of salvageable and necrotic tissue from multi-sequence MRI. RESULTS: A comprehensive experimental analysis of MIRW is performed on three challenging adult (sub)acute ischemic stroke datasets using performance measures such as precision, sensitivity, specificity, and Dice similarity score (DSC), computed with respect to the manual ground truth. COMPARISON WITH EXISTING METHODS: MIRW achieved a high DSC of 83.5% at a low computational cost of 98.23 s/volume, a significant improvement on the ISLES benchmark dataset for penumbra detection compared with state-of-the-art techniques. CONCLUSION: Quantitative measures demonstrate the promising potential of MIRW for the computational analysis of adult stroke and for quantifying penumbra in stroke patients, which is essential for selecting good candidates for recanalization.
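The seed-estimation idea here, splitting the volumetric intensity histogram where the inter-class variation of the resulting tissue classes is maximal, is the same criterion as Otsu's threshold. A minimal numpy sketch of that criterion (a generic stand-in, not the MIRW algorithm; bin count and sample data are illustrative):

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Threshold maximizing between-class variance of a histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # mass of the lower class
    w1 = 1 - w0                  # mass of the upper class
    m = np.cumsum(p * centers)   # cumulative mean mass
    m_total = m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        var_between = (m_total * w0 - m) ** 2 / (w0 * w1)
    k = np.nanargmax(var_between[:-1])  # skip the degenerate last bin
    return centers[k]

# bimodal sample: "healthy" intensities near 0, "lesion" near 10
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 100)])
print(otsu_threshold(vals))  # lands between the two modes
```

In MIRW this kind of histogram split only seeds the random walker, which then propagates labels through the axial, coronal, and sagittal volumes; the sketch shows the seeding criterion alone.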