1
Zhao X, Zang D, Wang S, Shen Z, Xuan K, Wei Z, Wang Z, Zheng R, Wu X, Li Z, Wang Q, Qi Z, Zhang L. sTBI-GAN: An adversarial learning approach for data synthesis on traumatic brain segmentation. Comput Med Imaging Graph 2024; 112:102325. [PMID: 38228021 DOI: 10.1016/j.compmedimag.2024.102325] [Received: 05/25/2023] [Revised: 11/18/2023] [Accepted: 12/12/2023] [Indexed: 01/18/2024]
Abstract
Automatic brain segmentation of magnetic resonance images (MRIs) from severe traumatic brain injury (sTBI) patients is critical for brain abnormality assessment and brain network analysis. Constructing an sTBI brain segmentation model requires manually annotated MR scans of sTBI patients, which is challenging because it is impractical to obtain sufficient annotations for sTBI images with large deformations and lesion erosion. Data augmentation can alleviate the issue of limited training samples; however, conventional strategies such as spatial and intensity transformations cannot synthesize the deformations and lesions of traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model, sTBI-GAN, to synthesize labeled sTBI MR scans by adversarial inpainting. The main strength of sTBI-GAN is that it generates sTBI images and the corresponding labels simultaneously, which previous inpainting methods for medical images have not achieved. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then use the synthesized MR image as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that sTBI-GAN synthesizes high-quality labeled sTBI images, which greatly improves 2D and 3D traumatic brain segmentation performance compared with the alternatives. Code is available at .
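As a rough illustration of the inpainting idea described above — a generator fills only the masked lesion region while all known voxels are kept from the original scan — here is a minimal NumPy sketch. The function name and the constant stand-in for a generator's output are illustrative assumptions, not part of the paper's sTBI-GAN implementation:

```python
import numpy as np

def composite_inpainting(image, generated, mask):
    """Blend a generator's output into an image: masked (lesion)
    voxels come from `generated`, all other voxels stay untouched."""
    mask = mask.astype(image.dtype)
    return mask * generated + (1.0 - mask) * image

# Toy example: a 4x4 "scan" with a 2x2 lesion mask.
image = np.ones((4, 4))
generated = np.full((4, 4), 5.0)   # stand-in for a GAN's output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0

result = composite_inpainting(image, generated, mask)
```

The same compositing applies per-channel to a label volume once the synthesized image is used as the prior for label inpainting.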
Affiliation(s)
- Xiangyu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Di Zang
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Sheng Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhenrong Shen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Kai Xuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zeyu Wei
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Zhe Wang
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Ruizhe Zheng
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Xuehai Wu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Zheren Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Zengxin Qi
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; National Center for Neurological Disorders, Shanghai, China; Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, School of Basic Medical Sciences and Institutes of Brain Science, Fudan University, China
- Lichi Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
2
Ilyas T, Ahmad K, Arsa DMS, Jeong YC, Kim H. Enhancing medical image analysis with unsupervised domain adaptation approach across microscopes and magnifications. Comput Biol Med 2024; 170:108055. [PMID: 38295480 DOI: 10.1016/j.compbiomed.2024.108055] [Received: 10/06/2023] [Revised: 01/05/2024] [Accepted: 01/26/2024] [Indexed: 02/02/2024]
Abstract
In the domain of medical image analysis, deep learning models are heralding a revolution, especially in detecting the complex and nuanced features characteristic of diseases such as tumors and cancers. However, the robustness and adaptability of these models across varied imaging conditions and magnifications remain a formidable challenge. This paper introduces the Fourier Adaptive Recognition System (FARS), a model primarily engineered to address adaptability in malarial parasite recognition, though its foundational principles extend seamlessly to broader applications, including tumor and cancer diagnostics. FARS capitalizes on the untapped potential of transitioning from bounding-box labels to richer semantic segmentation labels, enabling a more refined examination of microscopy slides. With the integration of adversarial training and Color Domain Aware Fourier Domain Adaptation (F2DA), the model ensures consistent feature extraction across diverse microscopy configurations. The further inclusion of category-dependent context attention amplifies FARS's cross-domain versatility. Evidenced by a substantial elevation in cross-magnification performance from 31.3% mAP to 55.19% mAP and a 15.68% boost in cross-domain adaptability, FARS positions itself as a significant advancement in malarial parasite recognition. Furthermore, the core methodologies of FARS can serve as a blueprint for enhancing precision in other realms of medical image analysis, especially in the complex terrains of tumor and cancer imaging. The code is available at https://github.com/Mr-TalhaIlyas/FARS.
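The paper's Color Domain Aware F2DA is not detailed in this abstract, but the underlying Fourier domain adaptation idea — replacing the low-frequency amplitude spectrum of a source image with the target's while keeping the source phase, so content is preserved but low-level "style" shifts — can be sketched in NumPy. The function name and the `beta` band-size parameter are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def fourier_amplitude_swap(src, tgt, beta=0.1):
    """Swap the centred low-frequency amplitude band of `src`
    for that of `tgt`, keeping the source phase (a simplified
    Fourier domain adaptation; not the paper's exact F2DA)."""
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    # Centre the spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(np.abs(fft_tgt))
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Copy the target's low-frequency amplitude into the source.
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_tgt[ch - bh:ch + bh, cw - bw:cw + bw]
    amp_src = np.fft.ifftshift(amp_src)
    adapted = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(adapted)

# Usage: push a random "source" image toward a "target" style.
adapted = fourier_amplitude_swap(np.random.rand(64, 64),
                                 np.random.rand(64, 64))
```

With `beta=0` no amplitude band is swapped and the round trip returns the source image unchanged, which is a handy sanity check.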
Affiliation(s)
- Talha Ilyas
- Division of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, 54896, Republic of Korea
- Khubaib Ahmad
- Division of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, 54896, Republic of Korea
- Dewa Made Sri Arsa
- Division of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Department of Information Technology, Universitas Udayana, Bali, 80361, Indonesia
- Yong Chae Jeong
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Division of Electronics Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea
- Hyongsuk Kim
- Division of Electronics and Information Engineering, Jeonbuk National University, Jeonju, 54896, Republic of Korea; Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, 54896, Republic of Korea
3
Cui YM, Wang HL, Cao R, Bai H, Sun D, Feng JX, Lu XF. The Segmentation of Multiple Types of Uterine Lesions in Magnetic Resonance Images Using a Sequential Deep Learning Method with Image-Level Annotations. J Imaging Inform Med 2024; 37:374-385. [PMID: 38343259 DOI: 10.1007/s10278-023-00931-9] [Received: 06/28/2023] [Revised: 10/25/2023] [Accepted: 10/27/2023] [Indexed: 03/02/2024]
Abstract
Fully supervised medical image segmentation methods use pixel-level labels to achieve good results, but obtaining such large-scale, high-quality labels is cumbersome and time-consuming. This study aimed to develop a weakly supervised model that uses only image-level labels to achieve automatic segmentation of four types of uterine lesions and three types of normal tissues on magnetic resonance images. The MRI data were retrospectively collected from our institution's database; T2-weighted sequence images were selected and only image-level annotations were made. The proposed two-stage model comprises four sequential parts: the pixel correlation module, the class re-activation map module, the inter-pixel relation network module, and the Deeplab v3+ module. The Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average symmetric surface distance (ASSD) were employed to evaluate the performance of the model. The original dataset consisted of 85,730 images from 316 patients with four different types of lesions (i.e., endometrial cancer, uterine leiomyoma, endometrial polyps, and atypical endometrial hyperplasia). A total of 196, 57, and 63 patients were randomly selected for model training, validation, and testing, respectively. After being trained from scratch, the proposed model showed good segmentation performance, with an average DSC of 83.5%, HD of 29.3 mm, and ASSD of 8.83 mm. Among weakly supervised methods using only image-level labels, the performance of the proposed model is equivalent to the state of the art.
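The DSC reported above is computed directly from two binary masks as twice the overlap divided by the total mask size, DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the function name and the empty-mask convention are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND gt| / (|pred| + |gt|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

A DSC of 1.0 means identical masks; 0.0 means no overlap, so the paper's 83.5% average indicates substantial agreement with the (image-level-supervised) pseudo-ground truth.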
Affiliation(s)
- Yu-Meng Cui
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Hua-Li Wang
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Rui Cao
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Hong Bai
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Dan Sun
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Jiu-Xiang Feng
- Department of Gynecology, Dalian Women and Children's Medical Group, Dalian, 116033, China
- Xue-Feng Lu
- School of Food Science and Engineering, Dalian Ocean University, Dalian, 116023, China
4
Malik M, Chong B, Fernandez J, Shim V, Kasabov NK, Wang A. Stroke Lesion Segmentation and Deep Learning: A Comprehensive Review. Bioengineering (Basel) 2024; 11:86. [PMID: 38247963 PMCID: PMC10813717 DOI: 10.3390/bioengineering11010086] [Received: 12/18/2023] [Revised: 01/05/2024] [Accepted: 01/15/2024] [Indexed: 01/23/2024]
Abstract
Stroke is a medical condition that affects around 15 million people annually. It can cause motor, speech, cognitive, and emotional impairments, and patients and their families may face severe financial and emotional challenges. Stroke lesion segmentation identifies the lesion visually while providing useful anatomical information. Although various computer-aided software packages support manual segmentation, state-of-the-art deep learning makes the job much easier. This review explores deep-learning-based lesion segmentation models and the impact of different pre-processing techniques on their performance, providing a comprehensive overview of state-of-the-art models to guide future research and contribute to the development of more robust and effective stroke lesion segmentation models.
Affiliation(s)
- Mishaim Malik
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Nikola Kirilov Kasabov
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Knowledge Engineering and Discovery Research Innovation, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Knowledge Engineering Consulting Ltd., Auckland 1071, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Medical Imaging Research Centre, The University of Auckland, Auckland 1010, New Zealand
- Centre for Co-Created Ageing Research, The University of Auckland, Auckland 1010, New Zealand