1
Huang Z, Deng Z, Ye J, Wang H, Su Y, Li T, Sun H, Cheng J, Chen J, He J, Gu Y, Zhang S, Gu L, Qiao Y. A-Eval: A benchmark for cross-dataset and cross-modality evaluation of abdominal multi-organ segmentation. Med Image Anal 2025; 101:103499. [PMID: 39970528] [DOI: 10.1016/j.media.2025.103499] [Received: 02/20/2024] [Revised: 01/16/2025] [Accepted: 02/06/2025] [Indexed: 02/21/2025]
Abstract
Although deep learning has revolutionized abdominal multi-organ segmentation, its models often struggle to generalize because they are trained on small-scale, modality-specific datasets. The recent emergence of large-scale datasets may mitigate this issue, but some important questions remain unanswered: Can models trained on these large datasets generalize well across different datasets and imaging modalities? And if not, how can their generalizability be further improved? To address these questions, we introduce A-Eval, a benchmark for the cross-dataset and cross-modality Evaluation ('Eval') of Abdominal ('A') multi-organ segmentation, integrating seven datasets across CT and MRI modalities. Our evaluations indicate that significant domain gaps persist despite larger data scales. While larger training datasets improve generalization, model performance on unseen data remains inconsistent. Joint training across multiple datasets and modalities further enhances generalization, though annotation inconsistencies pose challenges. These findings highlight the need for diverse, well-curated training data spanning various clinical scenarios and modalities to develop robust medical imaging models. The code and pre-trained models are available at https://github.com/uni-medical/A-Eval.
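The benchmark's core measurement, per-organ overlap between predicted and reference segmentations, can be sketched with the Dice similarity coefficient. This is a minimal illustration using toy 2-D label maps and hypothetical organ labels, not the A-Eval evaluation code; the real benchmark scores 3-D CT/MRI volumes across datasets.

```python
import numpy as np

def dice(pred, gt, label):
    """Dice similarity coefficient for a single organ label."""
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    # Convention: empty-vs-empty counts as perfect agreement.
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

# Toy 2-D masks standing in for 3-D segmentations
# (hypothetical labels: 1 = liver, 2 = spleen).
gt   = np.array([[1, 1, 0],
                 [2, 2, 0]])
pred = np.array([[1, 0, 0],
                 [2, 2, 0]])

scores = {organ: dice(pred, gt, lab)
          for organ, lab in [("liver", 1), ("spleen", 2)]}
print(scores)  # liver: 2*1/(2+1) ≈ 0.667, spleen: 1.0
```

Cross-dataset evaluation then amounts to computing such per-organ scores for a model trained on one dataset but tested on every other dataset and modality.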
Affiliation(s)
- Ziyan Huang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Zhongying Deng
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China; Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB2 1TN, United Kingdom
- Jin Ye
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Haoyu Wang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Yanzhou Su
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Tianbin Li
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Hui Sun
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Junlong Cheng
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Jianpin Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Junjun He
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Yun Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
- Lixu Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yu Qiao
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200000, China
2
Baek S, Ye DH, Lee O. UNet-based multi-organ segmentation in photon counting CT using virtual monoenergetic images. Med Phys 2025; 52:481-488. [PMID: 39374095] [DOI: 10.1002/mp.17440] [Received: 01/08/2024] [Revised: 09/11/2024] [Accepted: 09/16/2024] [Indexed: 10/09/2024]
Abstract
BACKGROUND: Multi-organ segmentation aids in disease diagnosis, treatment planning, and radiotherapy. The recently emerged photon counting detector-based CT (PCCT) provides spectral information about the organs and background tissue and may improve segmentation performance.
PURPOSE: We propose UNet-based multi-organ segmentation in PCCT using virtual monoenergetic images (VMIs) to exploit spectral information effectively.
METHODS: The proposed method consists of the following steps: noise reduction in bin-wise images, image-based material decomposition, VMI synthesis, and deep learning-based segmentation. VMIs are synthesized at various x-ray energies from the basis images. UNet-based networks (3D UNet, Swin UNETR) were used for segmentation, with the Dice similarity coefficient (DSC) and 3D visualization of the segmented results as evaluation metrics. We validated the proposed method on liver, pancreas, and spleen segmentation using abdominal phantoms from 55 subjects, for both dual- and quad-energy bins, and compared it to conventional PCCT-based segmentation, which uses only the (noise-reduced) bin-wise images. Experiments were conducted at two dose levels.
RESULTS: The proposed method improved training stability in most cases. With the proposed method, the average DSC for the three organs increased slightly from 0.933 to 0.950 and the standard deviation decreased from 0.066 to 0.047, for example, in the low-dose case (VMIs vs. bin-wise images from dual-energy bins; 3D UNet).
CONCLUSIONS: The proposed method using VMIs improves training stability for multi-organ segmentation in PCCT, particularly when the number of energy bins is small.
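The VMI synthesis step described above can be sketched as a per-pixel weighted sum of basis-material images, where the weights are the basis materials' mass attenuation coefficients at the chosen energy. This is a minimal sketch, not the authors' implementation: the basis-material choice (water/bone), the energy grid, and all coefficient values below are illustrative placeholders, not real tabulated data.

```python
import numpy as np

# Illustrative mass attenuation coefficients (cm^2/g) at selected
# energies (keV); placeholder values, NOT real NIST data.
MU = {
    40:  {"water": 0.268, "bone": 0.665},
    70:  {"water": 0.193, "bone": 0.281},
    100: {"water": 0.171, "bone": 0.209},
}

def synthesize_vmi(rho_water, rho_bone, energy_kev):
    """Linear attenuation VMI (1/cm) from basis density maps (g/cm^3)."""
    mu = MU[energy_kev]
    return rho_water * mu["water"] + rho_bone * mu["bone"]

# Toy 2x2 basis images produced by material decomposition.
rho_w = np.array([[1.0, 1.0],
                  [0.9, 0.2]])
rho_b = np.array([[0.0, 0.1],
                  [0.0, 1.5]])

vmi_70 = synthesize_vmi(rho_w, rho_b, 70)  # 70 keV VMI fed to the UNet
```

Synthesizing VMIs at several energies from the same two basis images is what lets a small number of energy bins still yield a rich multi-channel input for the segmentation network.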
Affiliation(s)
- Sumin Baek
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, Republic of Korea
- Dong Hye Ye
- Department of Computer Science, College of Arts & Science, Georgia State University, Atlanta, Georgia, USA
- Okkyun Lee
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, Republic of Korea