1. Xu W, Li C, Bian Y, Meng Q, Zhu W, Shi F, Chen X, Shao C, Xiang D. Cross-Modal Consistency for Single-Modal MR Image Segmentation. IEEE Trans Biomed Eng 2024;71:2557-2567. [PMID: 38512744] [DOI: 10.1109/tbme.2024.3380058]
Abstract
OBJECTIVE Multi-modal magnetic resonance (MR) image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to obtain multiple modalities for a single patient in clinical applications. To address this issue, a cross-modal consistency framework is proposed for single-modal MR image segmentation. METHODS To enable single-modal MR image segmentation in the inference stage, a weighted cross-entropy loss and a pixel-level feature consistency loss are proposed to train the target network under the guidance of the teacher network and the auxiliary network. To fuse dual-modal MR images in the training stage, cross-modal consistency is measured with a Dice similarity entropy loss and a Dice similarity contrastive loss, so as to maximize the prediction similarity of the teacher and auxiliary networks. To reduce contrast differences between MR images of the same organs, a contrast alignment network is proposed to align input images of varying contrast to reference images with good contrast. RESULTS Comprehensive experiments on a publicly available prostate dataset and an in-house pancreas dataset verify the effectiveness of the proposed method, which achieves better segmentation than state-of-the-art methods. CONCLUSION The proposed segmentation method fuses dual-modal MR images in the training stage and needs only single-modal MR images in the inference stage. SIGNIFICANCE The proposed method can be used in routine clinical settings when only a single-modal MR image of variable contrast is available for a patient.
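As a rough illustration of how prediction-level consistency between two networks can be implemented, the sketch below computes a soft Dice agreement term between teacher and auxiliary outputs (PyTorch assumed). The function names and weighting are hypothetical stand-ins, not the paper's exact Dice similarity entropy and contrastive losses.

```python
# A hedged sketch (assumed PyTorch; names hypothetical): a soft Dice
# agreement term between two networks' predictions, in the spirit of
# teacher/auxiliary consistency training. Not the paper's exact losses.
import torch

def soft_dice(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice similarity between two probability maps of shape (N, C, H, W)."""
    inter = (p * q).sum(dim=(2, 3))
    denom = p.sum(dim=(2, 3)) + q.sum(dim=(2, 3))
    return ((2.0 * inter + eps) / (denom + eps)).mean()

def consistency_loss(teacher_logits: torch.Tensor, aux_logits: torch.Tensor) -> torch.Tensor:
    """1 - Dice agreement: small when the two networks predict alike."""
    p = torch.softmax(teacher_logits, dim=1)
    q = torch.softmax(aux_logits, dim=1)
    return 1.0 - soft_dice(p, q)

# Typical use: total = supervised_loss + lam * consistency_loss(t_out, a_out)
```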
2. Jaakkola MK, Rantala M, Jalo A, Saari T, Hentilä J, Helin JS, Nissinen TA, Eskola O, Rajander J, Virtanen KA, Hannukainen JC, López-Picón F, Klén R. Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering. Int J Biomed Imaging 2023;2023:3819587. [PMID: 38089593] [PMCID: PMC10715853] [DOI: 10.1155/2023/3819587]
Abstract
Clustering time activity curves of PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied, because available total-body data have been limited to animal studies. New PET scanners capable of acquiring total-body scans from humans are now becoming more common, which opens plenty of clinically interesting opportunities. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely from the dynamic PET images. The tested methods are common building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analyses. We combined k-means with two different preprocessing approaches, namely principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images, and a few actual human total-body images to ensure that our conclusions from the rat data generalise to human data. Our results show that ICA combined with k-means performs worse than the other two computationally usable approaches, and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging one. We therefore conclude that there is a lack of accurate and computationally light general-purpose segmentation methods that can analyse dynamic total-body PET images.
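A minimal sketch of the PCA-plus-k-means pipeline the study identifies as most promising, assuming the dynamic PET volume is a NumPy array of shape (T, X, Y, Z) with one time-activity curve per voxel; the component and cluster counts below are illustrative, not the study's tuned values.

```python
# Minimal sketch: cluster voxels by their time-activity curves (TACs)
# after PCA compression. Counts below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def segment_tacs(pet: np.ndarray, n_components: int = 10, n_clusters: int = 8) -> np.ndarray:
    T = pet.shape[0]
    tacs = pet.reshape(T, -1).T                     # (n_voxels, T), one TAC per row
    feats = PCA(n_components=n_components).fit_transform(tacs)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(pet.shape[1:])            # cluster-label volume (X, Y, Z)
```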
Affiliation(s)
- Maria K. Jaakkola
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Maria Rantala
- Turku PET Centre, University of Turku, Turku, Finland
- Anna Jalo
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Teemu Saari
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Jatta S. Helin
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Tuuli A. Nissinen
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Olli Eskola
- Radiopharmaceutical Chemistry Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Johan Rajander
- Accelerator Laboratory, Turku PET Centre, Åbo Akademi University, Turku, Finland
- Kirsi A. Virtanen
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Francisco López-Picón
- Turku PET Centre, University of Turku, Turku, Finland
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Riku Klén
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
3. Xu W, Bian Y, Lu Y, Meng Q, Zhu W, Shi F, Chen X, Shao C, Xiang D. Semi-supervised interactive fusion network for MR image segmentation. Med Phys 2023;50:1586-1600. [PMID: 36345139] [DOI: 10.1002/mp.16072]
Abstract
BACKGROUND Medical image segmentation is an important task in the diagnosis and treatment of cancers. Low contrast and highly flexible anatomical structures make it challenging to accurately segment organs or lesions. PURPOSE To improve the segmentation accuracy of organs or lesions in magnetic resonance (MR) images, which can be useful in the clinical diagnosis and treatment of cancers. METHODS First, a selective feature interaction (SFI) module is designed to selectively extract similar features from the sequence images based on similarity interaction. Second, a multi-scale guided feature reconstruction (MGFR) module is designed to reconstruct low-level semantic features and focus on small targets and the edges of the pancreas. Third, to reduce the manual annotation of large amounts of data, a semi-supervised training method is proposed, with uncertainty estimation used to further improve segmentation accuracy. RESULTS Three hundred ninety-five 3D MR images from 395 patients with pancreatic cancer, 259 3D MR images from 259 patients with brain tumors, and a four-fold cross-validation strategy were used to evaluate the proposed method. Compared to state-of-the-art deep learning segmentation networks, the proposed method achieves better segmentation of the pancreas and tumors in MR images. CONCLUSIONS SFI-Net can fuse dual-sequence MR images for abnormal pancreas or tumor segmentation, and the proposed semi-supervised strategy further improves its performance.
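The abstract does not spell out how the uncertainty estimate enters the semi-supervised training, so the sketch below shows one common pattern only: entropy-based masking of teacher pseudo-labels. All names and the threshold value are assumptions, not SFI-Net's actual scheme.

```python
# One common pattern only (not SFI-Net's documented scheme): cross-entropy
# on teacher pseudo-labels, masked where the teacher's per-pixel entropy
# exceeds a threshold. All names and values here are assumptions.
import torch
import torch.nn.functional as F

def masked_pseudo_label_loss(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor,
                             tau: float = 0.5) -> torch.Tensor:
    with torch.no_grad():
        probs = torch.softmax(teacher_logits, dim=1)            # (N, C, H, W)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        pseudo = probs.argmax(dim=1)                            # (N, H, W) hard labels
        mask = (entropy < tau).float()                          # keep confident pixels
    ce = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (ce * mask).sum() / mask.sum().clamp_min(1.0)
```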
Affiliation(s)
- Wenxuan Xu
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Yuxuan Lu
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Qingquan Meng
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
- Chengwei Shao
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu, China
4. Neural Network-Based Dynamic Segmentation and Weighted Integrated Matching of Cross-Media Piano Performance Audio Recognition and Retrieval Algorithm. Comput Intell Neurosci 2022;2022:9323646. [PMID: 35602641] [PMCID: PMC9122679] [DOI: 10.1155/2022/9323646]
Abstract
This paper presents a dynamic segmentation and weighted comprehensive matching algorithm based on neural networks for cross-media piano performance audio recognition and retrieval. The 3D convolutional neural network is separated to compress the network parameters and improve computational speed; skip connections and layer-wise learning rates address the difficulty of training the separated network, and a shuffle operation facilitates piano performance audio recognition. In pattern recognition, music information retrieval (MIR) algorithms are gaining increasing attention due to their ease of implementation and efficiency, but imprecise dynamic note segmentation and inconsistent matching templates directly limit their accuracy. We propose a dynamic threshold-based segmentation and weighted comprehensive matching algorithm to solve these problems. The amplitude difference step is set dynamically, and notes are segmented according to the changing threshold to improve the accuracy of note segmentation. A standard score frequency is used to transform the pitch template, normalizing the input and enhancing matching accuracy. Direct matching and dynamic time warping (DTW) matching are fused to improve the adaptability and robustness of the algorithm, and the effectiveness of the method is demonstrated experimentally. The system is implemented through three main modules for cross-media piano performance big data: a collection, processing, and storage module; an audio recognition model-building module; and a dynamic precision module.
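For readers unfamiliar with the matching step, here is a textbook dynamic time warping distance over two one-dimensional pitch sequences; this is a generic implementation, not the authors' weighted comprehensive variant.

```python
# A textbook dynamic time warping (DTW) distance between two 1-D pitch
# sequences; generic, not the authors' weighted comprehensive variant.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(float(a[i - 1]) - float(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# e.g. dtw_distance(np.array([440.0, 494.0, 523.0]), np.array([442.0, 490.0, 525.0]))
```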
5. Lei T, Wang R, Zhang Y, Wan Y, Liu C, Nandi AK. DefED-Net: Deformable Encoder-Decoder Network for Liver and Liver Tumor Segmentation. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3059780]
6. Chen S, Zhong X, Dorn S, Ravikumar N, Tao Q, Huang X, Lell M, Kachelriess M, Maier A. Improving Generalization Capability of Multiorgan Segmentation Models Using Dual-Energy CT. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3055199]
7. Iantsen A, Ferreira M, Lucia F, Jaouen V, Reinhold C, Bonaffini P, Alfieri J, Rovira R, Masson I, Robin P, Mervoyer A, Rousseau C, Kridelka F, Decuypere M, Lovinfosse P, Pradier O, Hustinx R, Schick U, Visvikis D, Hatt M. Convolutional neural networks for PET functional volume fully automatic segmentation: development and validation in a multi-center setting. Eur J Nucl Med Mol Imaging 2021;48:3444-3456. [PMID: 33772335] [PMCID: PMC8440243] [DOI: 10.1007/s00259-021-05244-z]
Abstract
Purpose In this work, we addressed the fully automatic determination of tumor functional uptake from positron emission tomography (PET) images, without relying on other image modalities or additional prior constraints, in the context of multicenter images with heterogeneous characteristics. Methods In cervical cancer, an additional challenge is the location of the tumor uptake near, or even adjoining, the bladder. PET datasets of 232 patients from five institutions were exploited. To avoid unreliable manual delineations, the ground truth was generated with a semi-automated approach: a volume containing the tumor and excluding the bladder was first manually determined, and a well-validated, semi-automated approach relying on the Fuzzy Locally Adaptive Bayesian (FLAB) algorithm was then applied to generate the ground truth. Our model, built on the U-Net architecture, incorporates residual blocks with concurrent spatial squeeze and excitation modules, as well as learnable non-linear downsampling and upsampling blocks. Experiments relied on cross-validation (four institutions for training and validation, and the fifth for testing). Results The model achieved a good Dice similarity coefficient (DSC) with little variability across institutions (0.80 ± 0.03), with higher recall (0.90 ± 0.05) than precision (0.75 ± 0.05), and improved results over the standard U-Net (DSC 0.77 ± 0.05, recall 0.87 ± 0.02, precision 0.74 ± 0.08). Both vastly outperformed a fixed threshold at 40% of SUVmax (DSC 0.33 ± 0.15, recall 0.52 ± 0.17, precision 0.30 ± 0.16). In all cases, the model could determine the tumor uptake without including the bladder. Neither shape priors nor anatomical information was required to achieve efficient training. Conclusion The proposed method could facilitate the deployment of a fully automated radiomics pipeline in such a challenging multicenter context.
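For orientation, the sketch below implements a concurrent spatial and channel squeeze-and-excitation (scSE) block in the style of Roy et al.; reading the abstract's "concurrent spatial squeeze and excitation modules" as scSE, and the reduction ratio of 16, are assumptions on our part rather than details from the paper.

```python
# Sketch of a concurrent spatial and channel squeeze-and-excitation (scSE)
# block after Roy et al.; interpreting the paper's module as scSE and the
# reduction ratio are assumptions, not details taken from the abstract.
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.cse = nn.Sequential(               # channel squeeze-excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(               # spatial squeeze-excitation
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate features along channels and along space, then combine.
        return x * self.cse(x) + x * self.sse(x)
```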
Affiliation(s)
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Marta Ferreira
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
- Francois Lucia
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Vincent Jaouen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Caroline Reinhold
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
- Pietro Bonaffini
- Department of Radiology, McGill University Health Centre (MUHC), Montreal, Canada
- Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre (MUHC), Montreal, Canada
- Ramon Rovira
- Gynecology Oncology and Laparoscopy Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Ingrid Masson
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Philippe Robin
- Nuclear Medicine Department, University Hospital, Brest, France
- Augustin Mervoyer
- Department of Radiation Oncology, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Caroline Rousseau
- Nuclear Medicine Department, Institut de Cancérologie de l'Ouest (ICO), Nantes, France
- Frédéric Kridelka
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
- Marjolein Decuypere
- Division of Oncological Gynecology, University Hospital of Liège, Liège, Belgium
- Pierre Lovinfosse
- Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège, Liège, Belgium
- Roland Hustinx
- GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
- Ulrike Schick
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
8. Scarinci I, Valente M, Pérez P. SOCH. An ML-based pipeline for PET automatic segmentation by heuristic algorithms means. Inform Med Unlocked 2020. [DOI: 10.1016/j.imu.2020.100481]