1
Pham XL, Luu MH, van Walsum T, Mai HS, Klein S, Le NH, Chu DT. CMAN: Cascaded Multi-scale Spatial Channel Attention-guided Network for large 3D deformable registration of liver CT images. Med Image Anal 2024; 96:103212. [PMID: 38830326] [DOI: 10.1016/j.media.2024.103212]
Abstract
Deformable image registration is an essential component of medical image analysis and plays an irreplaceable role in clinical practice. In recent years, deep learning-based registration methods have demonstrated significant improvements in convenience, robustness and execution time compared to traditional algorithms. However, registering images with large displacements, such as those of the liver, remains underexplored and challenging. In this study, we present a novel convolutional neural network (CNN)-based unsupervised learning registration method, Cascaded Multi-scale Spatial-Channel Attention-guided Network (CMAN), which addresses the challenge of large deformation fields using a double coarse-to-fine registration approach. The main contributions of CMAN include: (i) local coarse-to-fine registration in the base network, which generates the displacement field for each resolution and progressively propagates these local deformations as auxiliary information for the final deformation field; (ii) global coarse-to-fine registration, which stacks multiple base networks for sequential warping, thereby incorporating richer multi-layer contextual details into the final deformation field; (iii) integration of the spatial-channel attention module in the decoder stage, which better highlights important features and improves the quality of feature maps. The proposed network was trained on two public datasets and evaluated on another public dataset as well as a private dataset across several experimental scenarios. We compared CMAN with four state-of-the-art CNN-based registration methods and two well-known traditional algorithms. The results show that the proposed double coarse-to-fine registration strategy outperforms the other methods on most registration evaluation metrics. In conclusion, CMAN can effectively handle the large-deformation registration problem and shows potential for application in clinical practice. The source code is publicly available at https://github.com/LocPham263/CMAN.git.
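The cascaded coarse-to-fine strategy described above amounts to repeatedly warping the moving image with a composed displacement field. The sketch below illustrates that general mechanism in PyTorch; it is not the authors' implementation, and the `warp3d`/`cascade` names and the per-stage interface are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp3d(volume, flow):
    """Warp a 3D volume (N, C, D, H, W) by a dense displacement field
    `flow` (N, 3, D, H, W) given in voxel units, using trilinear sampling."""
    n, _, d, h, w = volume.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=volume.dtype, device=volume.device),
        torch.arange(h, dtype=volume.dtype, device=volume.device),
        torch.arange(w, dtype=volume.dtype, device=volume.device),
        indexing="ij",
    )
    grid = torch.stack((zz, yy, xx), dim=0).unsqueeze(0)   # identity grid (1, 3, D, H, W)
    coords = grid + flow                                   # displaced voxel coordinates
    # Normalize to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
    cz = 2.0 * coords[:, 0] / max(d - 1, 1) - 1.0
    cy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    cx = 2.0 * coords[:, 2] / max(w - 1, 1) - 1.0
    sample_grid = torch.stack((cx, cy, cz), dim=-1)        # (N, D, H, W, 3)
    return F.grid_sample(volume, sample_grid, mode="bilinear", align_corners=True)

def cascade(moving, fixed, stages):
    """Sequentially warp `moving` through a list of registration stages,
    composing the intermediate displacement fields (coarse-to-fine idea)."""
    warped, total_flow = moving, None
    for stage in stages:                       # each stage: (moving, fixed) -> flow
        flow = stage(warped, fixed)
        total_flow = flow if total_flow is None else warp3d(total_flow, flow) + flow
        warped = warp3d(moving, total_flow)
    return warped, total_flow
```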
Affiliation(s)
- Xuan Loc Pham
- FET, VNU University of Engineering and Technology, Hanoi, Viet Nam; Diagnostic Image Analysis Group, Radboud UMC, Nijmegen, The Netherlands
- Manh Ha Luu
- FET, VNU University of Engineering and Technology, Hanoi, Viet Nam; AVITECH, VNU University of Engineering and Technology, Hanoi, Viet Nam; Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands.
- Theo van Walsum
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Hong Son Mai
- Department of Nuclear Medicine, Hospital 108, Hanoi, Viet Nam
- Stefan Klein
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ngoc Ha Le
- Department of Nuclear Medicine, Hospital 108, Hanoi, Viet Nam
- Duc Trinh Chu
- FET, VNU University of Engineering and Technology, Hanoi, Viet Nam
2
Ou J, Jiang L, Bai T, Zhan P, Liu R, Xiao H. ResTransUnet: An effective network combined with Transformer and U-Net for liver segmentation in CT scans. Comput Biol Med 2024; 177:108625. [PMID: 38823365] [DOI: 10.1016/j.compbiomed.2024.108625]
Abstract
Liver segmentation is a fundamental prerequisite for the diagnosis and surgical planning of hepatocellular carcinoma. Traditionally, the liver contour is drawn manually by radiologists using a slice-by-slice method. However, this process is time-consuming and error-prone, depending on the radiologist's experience. In this paper, we propose a new end-to-end automatic liver segmentation framework, named ResTransUNet, which exploits the transformer's ability to capture global context for remote interactions and spatial relationships, as well as the excellent performance of the original U-Net architecture. The main contribution of this paper lies in proposing a novel fusion network that combines U-Net and Transformer architectures. In the encoding structure, a dual-path approach is utilized, where features are extracted separately using both convolutional neural networks (CNNs) and Transformer networks. Additionally, an effective feature enhancement unit is designed to transfer the global features extracted by the Transformer network to the CNN for feature enhancement. This model aims to address the drawbacks of traditional U-Net-based methods, such as feature loss during encoding and poor capture of global features. Moreover, it avoids the disadvantages of pure Transformer models, which suffer from large parameter sizes and high computational complexity. The experimental results on the LiTS2017 dataset demonstrate remarkable performance for our proposed model, with Dice coefficient, volumetric overlap error (VOE), and relative volume difference (RVD) values for liver segmentation reaching 0.9535, 0.0804, and -0.0007, respectively. Furthermore, to further validate the model's generalization capability, we conducted tests on the 3Dircadb, Chaos, and Sliver07 datasets. The experimental results demonstrate that the proposed method outperforms other closely related models with higher liver segmentation accuracy. In addition, significant improvements can be achieved by applying our method when handling liver segmentation with small and discontinuous liver regions, as well as blurred liver boundaries. The code is available at https://github.com/Jouiry/ResTransUNet.
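The dual-path encoding with a feature enhancement unit described above boils down to projecting the Transformer branch's global features onto the CNN branch's feature map and gating them in. A minimal PyTorch sketch of that general pattern follows; the module name and layer choices are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEnhancement(nn.Module):
    """Fuse global (Transformer-path) features into local (CNN-path) features
    by projecting, resizing, and gating them onto the CNN feature map."""
    def __init__(self, cnn_channels, trans_channels):
        super().__init__()
        self.project = nn.Conv2d(trans_channels, cnn_channels, kernel_size=1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * cnn_channels, cnn_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, cnn_feat, trans_feat):
        # cnn_feat: (N, Cc, H, W); trans_feat: (N, Ct, h, w) from the Transformer branch
        g = self.project(trans_feat)
        g = F.interpolate(g, size=cnn_feat.shape[2:], mode="bilinear", align_corners=False)
        attn = self.gate(torch.cat([cnn_feat, g], dim=1))   # where to trust the global cue
        return cnn_feat + attn * g                           # enhanced CNN features

# Usage sketch: enhanced = FeatureEnhancement(64, 256)(cnn_feat, trans_feat)
```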
Affiliation(s)
- Jiajie Ou
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Linfeng Jiang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China; School of Computing and College of Design and Engineering, National University of Singapore, Singapore.
- Ting Bai
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Peidong Zhan
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Ruihua Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Hanguang Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
3
Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024; 69:11TR01. [PMID: 38479023] [DOI: 10.1088/1361-6560/ad33b5]
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially in radiotherapy treatment planning. Thus, it is of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and witnessed remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but has promoted many annotation-efficient learning paradigms. Among these, studies on transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have become the dominant ways to address this dilemma in multi-organ segmentation. We first review the fully supervised method, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Affiliation(s)
- Shiman Li
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Yucong Meng
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
4
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715] [PMCID: PMC11023777] [DOI: 10.1016/j.ijrobp.2023.10.033]
Abstract
Deep learning neural networks (DLNN) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based models for auto-segmentation have shown high accuracy in early studies in research settings and controlled environments (single institutions). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload from manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Contouring nomenclature and guideline standardization has been the main task undertaken by NRG Oncology. In clinical trials, AI auto-segmentation holds the potential to reduce interobserver variation, nomenclature non-compliance, and contouring guideline deviations. Meanwhile, trial reviewers could use AI tools to verify contour accuracy and the compliance of submitted datasets. Recognizing the growing clinical utilization and potential of these tools, NRG Oncology has formed a working group to evaluate commercial AI auto-segmentation tools. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, as well as precautions concerning their challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
5
Zhao J, Jiang T, Lin Y, Chan LC, Chan PK, Wen C, Chen H. Adaptive Fusion of Deep Learning With Statistical Anatomical Knowledge for Robust Patella Segmentation From CT Images. IEEE J Biomed Health Inform 2024; 28:2842-2853. [PMID: 38446653] [DOI: 10.1109/jbhi.2024.3372576]
Abstract
Knee osteoarthritis (KOA), a leading joint disease, can be assessed by examining the shape of the patella to spot potential abnormal variations. To assist doctors in the diagnosis of KOA, a robust automatic patella segmentation method is highly demanded in clinical practice. Deep learning methods, especially convolutional neural networks (CNNs), have been widely applied to medical image segmentation in recent years. Nevertheless, poor image quality and limited data still pose challenges for segmentation via CNNs. On the other hand, statistical shape models (SSMs) can generate shape priors that give anatomically reliable segmentations across varying instances. Thus, in this work, we propose an adaptive fusion framework that explicitly combines deep neural networks and anatomical knowledge from SSMs for robust patella segmentation. Our adaptive fusion framework adjusts the weights of segmentation candidates during fusion according to their segmentation performance. We also propose a voxel-wise refinement strategy to make the segmentation of CNNs more anatomically correct. Extensive experiments and thorough assessments have been conducted on various mainstream CNN backbones for patella segmentation in low-data regimes, which demonstrate that our framework can be flexibly attached to a CNN model, significantly improving its performance when labeled training data are limited and input image data are of poor quality.
6
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649] [DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise in terms of addressing the current demands for precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides regarding the qualitative interpretation of medical imaging by clinicians. Here, we review the current state of medical-imaging-based AI methodologies and their applications concerning the management of liver diseases. We summarize the representative AI methodologies in liver imaging with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may impact the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China.
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China.
7
Yao W, Bai J, Liao W, Chen Y, Liu M, Xie Y. From CNN to Transformer: A Review of Medical Image Segmentation Models. J Imaging Inform Med 2024. [PMID: 38438696] [DOI: 10.1007/s10278-024-00981-7]
Abstract
Medical image segmentation is an important step in medical image analysis, especially as a crucial prerequisite for efficient disease diagnosis and treatment. The use of deep learning for image segmentation has become a prevalent trend, with U-Net and its variants currently the most widely adopted approaches. Moreover, with the remarkable success of pre-trained models in natural language processing tasks, transformer-based models like TransUNet have achieved desirable performance on multiple medical image segmentation datasets. Recently, the Segment Anything Model (SAM) and its variants have also been attempted for medical image segmentation. In this paper, we survey the seven most representative medical image segmentation models of recent years. We theoretically analyze the characteristics of these models and quantitatively evaluate their performance on tuberculosis chest X-ray, ovarian tumor, and liver segmentation datasets. Finally, we discuss the main challenges and future trends in medical image segmentation. Our work can help researchers in the field quickly establish medical segmentation models tailored to specific regions.
Affiliation(s)
- Wenjian Yao
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, 610054, Chengdu, China
- Jiajun Bai
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, 610054, Chengdu, China
- Wei Liao
- Department of Obstetrics and Gynaecology, Deyang People's Hospital, 618000, Deyang, China
- Yuheng Chen
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, 610054, Chengdu, China
- Mengjuan Liu
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, 610054, Chengdu, China.
- Yao Xie
- Department of Obstetrics and Gynaecology, Sichuan Provincial People's Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China.
- Chinese Academy of Sciences Sichuan Translational Medicine Research Hospital, 610072, Chengdu, China.
8
Xia E, He J, Liao Z. MFA-ICPS: Semi-supervised medical image segmentation with improved cross pseudo supervision and multi-dimensional feature attention. Med Phys 2024; 51:1918-1930. [PMID: 37715995] [DOI: 10.1002/mp.16740]
Abstract
BACKGROUND In the medical field, medical image segmentation plays a pivotal role in facilitating disease evaluation and supporting treatment decision-making for doctors. Recently, deep learning methods have been employed in the field of medical image segmentation. However, the manual annotation of a large number of reliable labels is a costly and time-consuming process. PURPOSE To address this challenge, a semi-supervised learning framework is required to alleviate the burden of reliable labeling and enhance segmentation accuracy in challenging areas of medical images. METHODS Therefore, this paper presents the MFA-ICPS framework, a semi-supervised learning framework based on improved cross pseudo supervision (ICPS) and multi-dimensional feature attention (MFA) modules. Medical images inevitably contain some noise that may affect segmentation accuracy, so the proposed framework addresses this challenge by introducing noise disturbance, combining the ICPS and MFA modules, and using pseudo-segmentation maps and MFA maps to maintain consistency at both the output and feature levels. RESULTS In the experiments, the MFA-ICPS framework achieves the following performance on the left atrial dataset: Dice, Jaccard, 95HD, and ASD values of 90.89%, 83.40%, 6.00 mm, and 1.94 mm, respectively. On the pancreas-CT dataset, it achieves Dice, Jaccard, 95HD, and ASD values of 79.55%, 66.87%, 7.67 mm, and 1.65 mm, respectively. CONCLUSIONS The segmentation performance of the MFA-ICPS framework on different medical datasets demonstrates its remarkable capability to significantly enhance medical image segmentation.
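For readers unfamiliar with the underlying idea, cross pseudo supervision trains two differently initialized networks on unlabeled images, each against the other's hard pseudo-labels; the ICPS and MFA modules build on this. A minimal sketch of the plain CPS loss (PyTorch assumed, not the authors' ICPS) follows.

```python
import torch
import torch.nn.functional as F

def cross_pseudo_supervision_loss(logits_a, logits_b):
    """Plain cross pseudo supervision on unlabeled images: each network is
    trained against the hard pseudo-labels produced by the other one.
    logits_*: (N, C, H, W) outputs of two differently initialized networks."""
    pseudo_a = logits_a.argmax(dim=1).detach()    # pseudo-labels from network A
    pseudo_b = logits_b.argmax(dim=1).detach()    # pseudo-labels from network B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B's labels
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A's labels
    return 0.5 * (loss_a + loss_b)

# Typical semi-supervised step (sketch): total = supervised_ce + lambda_cps * cps_loss
```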
Affiliation(s)
- En Xia
- Chengdu University of Technology, College of Computer Science and Cyber Security (Oxford Brooks College), Chengdu, China
- Jianjun He
- Chengdu University of Technology, College of Computer Science and Cyber Security (Oxford Brooks College), Chengdu, China
- Zhangquan Liao
- Chengdu University of Technology, College of Geophysics, Chengdu, China
9
Huang Y, Yang X, Liu L, Zhou H, Chang A, Zhou X, Chen R, Yu J, Chen J, Chen C, Liu S, Chi H, Hu X, Yue K, Li L, Grau V, Fan DP, Dong F, Ni D. Segment anything model for medical images? Med Image Anal 2024; 92:103061. [PMID: 38086235] [DOI: 10.1016/j.media.2023.103061]
Abstract
The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: (1) SAM showed remarkable performance in some specific objects but was unstable, imperfect, or even totally failed in other situations. (2) SAM with the large ViT-H showed better overall performance than that with the small ViT-B. (3) SAM performed better with manual hints, especially box, than the Everything mode. (4) SAM could help human annotation with high labeling quality and less time. (5) SAM was sensitive to the randomness in the center point and tight box prompts, and may suffer from a serious performance drop. (6) SAM performed better than interactive methods with one or a few points, but will be outpaced as the number of points increases. (7) SAM's performance correlated to different factors, including boundary complexity, intensity differences, etc. (8) Finetuning the SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS, and guide how to appropriately use and develop SAM.
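The point/box prompting modes compared in this study can be reproduced with the public segment-anything package roughly as below; the checkpoint filename, the dummy image, and the prompt coordinates are illustrative assumptions.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone (ViT-B here; ViT-H tends to score higher per the study).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB-converted CT/MR slice
predictor.set_image(image)

# Box prompt (reported to work better than points for most medical targets).
box = np.array([100, 100, 300, 300])             # x0, y0, x1, y1 around the organ/lesion
masks_box, scores_box, _ = predictor.predict(box=box, multimask_output=False)

# Single foreground point prompt at the target centre.
point = np.array([[200, 200]])
masks_pt, scores_pt, _ = predictor.predict(
    point_coords=point, point_labels=np.array([1]), multimask_output=False
)
```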
Affiliation(s)
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Lian Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Han Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ao Chang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xinrui Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Rusi Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Junxuan Yu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jiongquan Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Sijing Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Kejuan Yue
- Hunan First Normal University, Changsha, China
- Lei Li
- Department of Engineering Science, University of Oxford, Oxford, UK
- Vicente Grau
- Department of Engineering Science, University of Oxford, Oxford, UK
- Deng-Ping Fan
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Fajin Dong
- Ultrasound Department, the Second Clinical Medical College, Jinan University, China; First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, China.
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
10
Reska D, Kretowski M. GPU-accelerated lung CT segmentation based on level sets and texture analysis. Sci Rep 2024; 14:1444. [PMID: 38228773] [PMCID: PMC10792028] [DOI: 10.1038/s41598-024-51452-6]
Abstract
This paper presents a novel semi-automatic method for lung segmentation in thoracic CT datasets. The fully three-dimensional algorithm is based on a level set representation of an active surface and integrates texture features to improve its robustness. The method's performance is enhanced by graphics processing unit (GPU) acceleration. The segmentation process starts with a manual initialisation of 2D contours on a few representative slices of the analysed volume. Next, the starting regions for the active surface are generated according to the probability maps of texture features. The active surface is then evolved to give the final segmentation result. The current implementation employs features based on grey-level co-occurrence matrices and Gabor filters. The algorithm was evaluated on real medical imaging data from the LCTCS 2017 challenge. The results were also compared with the outcomes of other segmentation methods. The proposed approach provided high segmentation accuracy while offering very competitive performance.
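As a rough illustration of the texture-feature side of such a pipeline, the sketch below computes a sliding-window GLCM contrast map with scikit-image (graycomatrix/graycoprops in recent versions); it is a slow CPU reference, not the paper's GPU implementation, and the window size and grey-level quantization are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast_map(slice_2d, win=15, levels=32):
    """Per-pixel GLCM 'contrast' over a sliding window of a CT slice,
    after quantizing intensities to a small number of grey levels.
    CPU reference version; slow but simple."""
    img = slice_2d.astype(np.float64)
    q = np.floor((img - img.min()) / (np.ptp(img) + 1e-8) * (levels - 1)).astype(np.uint8)
    pad = win // 2
    qp = np.pad(q, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = qp[i:i + win, j:j + win]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, "contrast").mean()
    return out
```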
Affiliation(s)
- Daniel Reska
- Faculty of Computer Science, Bialystok University of Technology, Białystok, Poland.
- Marek Kretowski
- Faculty of Computer Science, Bialystok University of Technology, Białystok, Poland
11
Shui Y, Wang Z, Liu B, Wang W, Fu S, Li Y. A three-path network with multi-scale selective feature fusion, edge-inspiring and edge-guiding for liver tumor segmentation. Comput Biol Med 2024; 168:107841. [PMID: 38081117] [DOI: 10.1016/j.compbiomed.2023.107841]
Abstract
Automatic liver tumor segmentation is one of the most important tasks in computer-aided diagnosis and treatment. Deep learning techniques have gained increasing popularity for medical image segmentation in recent years. However, due to the various shapes, sizes, and obscure boundaries of tumors, it is still difficult to automatically extract tumor regions from CT images. Based on the complementarity of edge detection and region segmentation, a three-path structure with multi-scale selective feature fusion (MSFF) module, multi-channel feature fusion (MFF) module, edge-inspiring (EI) module, and edge-guiding (EG) module is proposed in this paper. The MSFF module includes the process of generation, fusion, and selection of multi-scale features, which can adaptively correct the response weights in multiple branches to filter redundant information. The MFF module integrates richer hierarchical features to capture targets at different scales. The EI module aggregates high-level semantic information at different levels to obtain fine edge semantics, which is injected into the EG module for representation learning of segmentation features. Experiments on the LiTs2017 dataset show that our proposed method achieves a Dice index of 85.55% and a Jaccard index of 81.11%, which are higher than what can be obtained by the current state-of-the-art methods. Cross-dataset validation experiments conducted on 3Dircadb and Clinical datasets show the generalization and robustness of the proposed method by achieving dice indices of 80.14% and 81.68%, respectively.
Affiliation(s)
- Yuanyuan Shui
- School of Mathematics, Shandong University, Jinan, 250100, China
- Zhendong Wang
- School of Mathematics, Shandong University, Jinan, 250100, China
- Bin Liu
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Wei Wang
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, 250100, China; Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China.
- Yuliang Li
- Department of Intervention Medicine, The Second Hospital of Shandong University, Jinan, 250033, China.
12
Yang S, Liang Y, Wu S, Sun P, Chen Z. SADSNet: A robust 3D synchronous segmentation network for liver and liver tumors based on spatial attention mechanism and deep supervision. J Xray Sci Technol 2024; 32:707-723. [PMID: 38552134] [DOI: 10.3233/xst-230312]
Abstract
Highlights
• Introduce a data augmentation strategy to expand the range of tumor morphologies seen during training, improving the algorithm's ability to learn features from CT images with complex and diverse tumor morphology.
• Design attention mechanisms for the encoding and decoding paths to extract fine pixel-level features, improve feature extraction capability, and achieve efficient spatial-channel feature fusion.
• Use deep supervision layers to correct and decode the final image data, providing highly accurate results.
• The effectiveness of the method has been validated on the LITS, 3DIRCADb, and SLIVER datasets.
BACKGROUND Accurately extracting the liver and liver tumors from medical images is an important step in lesion localization and diagnosis, surgical planning, and postoperative monitoring. However, the limited number of radiation therapists and the great number of images make this work time-consuming. OBJECTIVE This study designs a spatial attention deep supervised network (SADSNet) for simultaneous automatic segmentation of the liver and tumors. METHOD Firstly, self-designed spatial attention modules are introduced at each layer of the encoder and decoder to extract image features at different scales and resolutions, helping the model better capture liver tumors and fine structures. The designed spatial attention module is implemented through two gate signals related to the liver and tumors, as well as varying convolutional kernel sizes. Secondly, deep supervision is added behind three layers of the decoder to assist the backbone network in feature learning and improve gradient propagation, enhancing robustness. RESULTS The method was tested on the LITS, 3DIRCADb, and SLIVER datasets. For the liver, it obtained Dice similarity coefficients of 97.03%, 96.11%, and 97.40%, surface Dice of 81.98%, 82.53%, and 86.29%, 95% Hausdorff distances of 8.96 mm, 8.26 mm, and 3.79 mm, and average surface distances of 1.54 mm, 1.19 mm, and 0.81 mm. It also achieved precise tumor segmentation, with Dice scores of 87.81% and 87.50%, surface Dice of 89.63% and 84.26%, 95% Hausdorff distances of 12.96 mm and 16.55 mm, and average surface distances of 1.11 mm and 3.04 mm on LITS and 3DIRCADb, respectively. CONCLUSION The experimental results show that the proposed method is effective and superior to several other methods. Therefore, this method can provide technical support for liver and liver tumor segmentation in clinical practice.
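The spatial attention described above can be pictured as a per-voxel gating map learned from channel-pooled features. A generic 3D sketch in this style (PyTorch assumed; not the SADSNet module with its liver/tumor gate signals) is shown below.

```python
import torch
import torch.nn as nn

class SpatialAttention3D(nn.Module):
    """Generic 3D spatial attention: pool features along the channel axis,
    predict a per-voxel weight map, and re-weight the input features."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (N, C, D, H, W)
        avg = x.mean(dim=1, keepdim=True)       # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)      # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # spatially re-weighted features

# Deep supervision (sketch): auxiliary 1x1x1 heads on the last decoder stages,
# each compared against a down-sampled ground-truth mask and added to the loss.
```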
Affiliation(s)
- Sijing Yang
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Yongbo Liang
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Shang Wu
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Peng Sun
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Zhencheng Chen
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin, China
- Guangxi Engineering Technology Research Center of Human Physiological Information Noninvasive Detection, Guilin, China
13
Zhang C, He W, Liu L, Dai J, Salim Ahmad I, Xie Y, Liang X. Volumetric feature points integration with bio-structure-informed guidance for deformable multi-modal CT image registration. Phys Med Biol 2023; 68:245007. [PMID: 37844603] [DOI: 10.1088/1361-6560/ad03d2]
Abstract
Objective. Medical image registration represents a fundamental challenge in medical image processing. Specifically, CT-CBCT registration has significant implications in the context of image-guided radiation therapy (IGRT). However, traditional iterative methods often require considerable computational time, and deep learning-based methods, especially when dealing with low-contrast organs, are frequently trapped in local optima. Approach. To address these limitations, we introduce a registration method based on volumetric feature point integration with bio-structure-informed guidance. A surface point cloud is generated from segmentation labels during the training stage, with both the surface-registered point pairs and the voxel feature point pairs co-guiding the training process, thereby achieving higher registration accuracy. Main results. Our findings have been validated on paired CT-CBCT datasets. In comparison with other deep learning registration methods, our approach improves precision by 6%, reaching state-of-the-art performance. Significance. The integration of voxel feature points and bio-structure feature points to guide the training of the medical image registration network has achieved promising results. This provides a meaningful direction for further research in medical image registration and IGRT.
Affiliation(s)
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Isah Salim Ahmad
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 Guangdong, People's Republic of China
14
Ostmeier S, Axelrod B, Isensee F, Bertels J, Mlynash M, Christensen S, Lansberg MG, Albers GW, Sheth R, Verhaaren BFJ, Mahammedi A, Li LJ, Zaharchuk G, Heit JJ. USE-Evaluator: Performance metrics for medical image segmentation models supervised by uncertain, small or empty reference annotations in neuroimaging. Med Image Anal 2023; 90:102927. [PMID: 37672900] [DOI: 10.1016/j.media.2023.102927]
Abstract
Performance metrics for medical image segmentation models are used to measure the agreement between the reference annotation and the predicted segmentation. Usually, overlap metrics, such as the Dice coefficient, are used to evaluate the performance of these models so that results are comparable. However, there is a mismatch between the distributions of cases and the difficulty level of segmentation tasks in public data sets compared to clinical practice. Common metrics used to assess performance fail to capture the impact of this mismatch, particularly when dealing with datasets in clinical settings that involve challenging segmentation tasks, pathologies with low signal, and reference annotations that are uncertain, small, or empty. Limitations of common metrics may result in ineffective machine learning research in designing and optimizing models. To effectively evaluate the clinical value of such models, it is essential to consider factors such as the uncertainty associated with reference annotations, the ability to accurately measure performance regardless of the size of the reference annotation volume, and the classification of cases where reference annotations are empty. We study how uncertain, small, and empty reference annotations influence the value of metrics on an in-house stroke data set, regardless of the model. We examine metric behavior on the predictions of a standard deep learning framework in order to identify suitable metrics in such a setting. We compare our results to the BRATS 2019 and Spinal Cord public data sets. We show how uncertain, small, or empty reference annotations require a rethinking of the evaluation. The evaluation code was released to encourage further analysis of this topic: https://github.com/SophieOstmeier/UncertainSmallEmpty.git.
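One of the concrete pitfalls discussed here is how an overlap score should behave when the reference annotation is empty. A small sketch of one explicit convention is given below (NumPy assumed; the empty-case value is a policy choice for illustration, not a standard).

```python
import numpy as np

def dice_with_empty_handling(pred, ref, empty_value=1.0):
    """Dice coefficient for binary masks with an explicit rule for empty
    reference annotations: if both masks are empty, return `empty_value`
    (a correct negative); if only one is empty, return 0.0."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    if not ref.any() and not pred.any():
        return empty_value
    if not ref.any() or not pred.any():
        return 0.0
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

# For empty-reference cases it is often more informative to also report
# detection-style metrics (e.g., false-positive volume) alongside overlap scores.
```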
Affiliation(s)
- Sophie Ostmeier
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America.
- Brian Axelrod
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Michael Mlynash
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Maarten G Lansberg
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Gregory W Albers
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Abdelkader Mahammedi
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Li-Jia Li
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Greg Zaharchuk
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
- Jeremy J Heit
- Stanford University, Center of Academic Medicine, 453 Quarry Rd, Palo Alto, CA 94304, United States of America
15
Horkaew P, Chansangrat J, Keeratibharat N, Le DC. Recent advances in computerized imaging and its vital roles in liver disease diagnosis, preoperative planning, and interventional liver surgery: A review. World J Gastrointest Surg 2023; 15:2382-2397. [PMID: 38111769] [PMCID: PMC10725533] [DOI: 10.4240/wjgs.v15.i11.2382]
Abstract
The earliest and most accurate detection of the pathological manifestations of hepatic diseases ensures effective treatments and thus positive prognostic outcomes. In clinical settings, screening and determining the extent of a pathology are prominent factors in preparing remedial agents and administering appropriate therapeutic procedures. Moreover, in a patient undergoing liver resection, a realistic preoperative simulation of the subject-specific anatomy and physiology also plays a vital part in conducting initial assessments, making surgical decisions during the procedure, and anticipating postoperative results. Conventionally, various medical imaging modalities, e.g., computed tomography, magnetic resonance imaging, and positron emission tomography, have been employed to assist in these tasks. In fact, several standardized procedures, such as lesion detection and liver segmentation, are also incorporated into prominent commercial software packages. Thus far, most integrated software as a medical device typically involves tedious interactions from the physician, such as manual delineation and empirical adjustments, as per a given patient. With the rapid progress in digital health approaches, especially medical image analysis, a wide range of computer algorithms have been proposed to facilitate those procedures. They include pattern recognition of a liver, its periphery, and lesion, as well as pre- and postoperative simulations. Prior to clinical adoption, however, software must conform to regulatory requirements set by the governing agency, for instance, valid clinical association and analytical and clinical validation. Therefore, this paper provides a detailed account and discussion of the state-of-the-art methods for liver image analyses, visualization, and simulation in the literature. Emphasis is placed upon their concepts, algorithmic classifications, merits, limitations, clinical considerations, and future research trends.
Affiliation(s)
- Paramate Horkaew
- School of Computer Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand
- Jirapa Chansangrat
- School of Radiology, Institute of Medicine, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand
- Nattawut Keeratibharat
- School of Surgery, Institute of Medicine, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand
- Doan Cong Le
- Faculty of Information Technology, An Giang University, Vietnam National University (Ho Chi Minh City), An Giang 90000, Vietnam
16
Kim H, Wilton SB, Garcia J. Left atrium 4D-flow segmentation with high-resolution contrast-enhanced magnetic resonance angiography. Front Cardiovasc Med 2023; 10:1225922. [PMID: 37904808] [PMCID: PMC10613494] [DOI: 10.3389/fcvm.2023.1225922]
Abstract
Background Atrial fibrillation (AF) leads to intracardiac thrombus and an associated risk of stroke. Phase-contrast cardiovascular magnetic resonance (CMR) with flow-encoding in all three spatial directions (4D-flow) provides a time-resolved 3D volume image with 3D blood velocity, which brings individual hemodynamic information affecting thrombus formation. As the resolution and contrast of 4D-flow are limited, we proposed a semi-automated 4D-flow segmentation method for the left atrium (LA) using a standard-of-care contrast-enhanced magnetic resonance angiography (CE-MRA) and registration technique. Methods LA of 54 patients with AF were segmented from 4D-flow taken in sinus rhythm using two segmentation methods. (1) Phase-contrast magnetic resonance angiography (PC-MRA) was calculated from 4D-flow, and LA was segmented slice-by-slice manually. (2) LA and other structures were segmented from CE-MRA and transformed into 4D-flow coordinates by registration with the mutual information method. Overlap of volume was tested by the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD). Mean velocity and stasis were calculated to compare the functional property of LA from two segmentation methods. Results LA volumes from segmentation on CE-MRA were strongly correlated with PC-MRA volume, although mean CE-MRA volumes were about 10% larger. The proposed registration scheme resulted in visually successful registration in 76% of cases after two rounds of registration. The mean of DSC of the registered cases was 0.770 ± 0.045, and the mean of ASSD was 2.704 mm ± 0.668 mm. Mean velocity had no significant difference between the two segmentation methods, and mean stasis had a 3.3% difference. Conclusion The proposed CE-MRA segmentation and registration method can generate segmentation for 4D-flow images. This method will facilitate 4D-flow analysis for AF patients by making segmentation easier and overcoming the limit of resolution.
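Registration of a CE-MRA volume to the 4D-flow-derived PC-MRA with a mutual-information metric can be prototyped with SimpleITK roughly as below; this is a generic rigid-registration sketch under stated assumptions, not the authors' pipeline, and the parameter values are illustrative.

```python
import SimpleITK as sitk

def register_mi(fixed, moving):
    """Rigid registration of a moving (e.g., CE-MRA) volume to a fixed (e.g.,
    PC-MRA) volume with Mattes mutual information, then resampling of the
    moving image into the fixed-image space. A segmentation drawn on the
    moving image can be resampled with the same transform (nearest neighbour)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    tx = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                     sitk.Cast(moving, sitk.sitkFloat32))
    resampled = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0,
                              moving.GetPixelID())
    return tx, resampled
```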
Affiliation(s)
- Hansuk Kim
- Biomedical Engineering, University of Calgary, Calgary, AB, Canada
- Stephenson Cardiac Imaging Centre, University of Calgary, Calgary, AB, Canada
- Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada
- Stephen B. Wilton
- Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada
- Department of Cardiac Sciences, University of Calgary, Calgary, AB, Canada
- Julio Garcia
- Stephenson Cardiac Imaging Centre, University of Calgary, Calgary, AB, Canada
- Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada
- Department of Cardiac Sciences, University of Calgary, Calgary, AB, Canada
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB, Canada
17
Meglič J, Sunoqrot MRS, Bathen TF, Elschot M. Label-set impact on deep learning-based prostate segmentation on MRI. Insights Imaging 2023; 14:157. [PMID: 37749333] [PMCID: PMC10519913] [DOI: 10.1186/s13244-023-01502-w]
Abstract
BACKGROUND Prostate segmentation is an essential step in computer-aided detection and diagnosis systems for prostate cancer. Deep learning (DL)-based methods provide good performance for prostate gland and zones segmentation, but little is known about the impact of manual segmentation (that is, label) selection on their performance. In this work, we investigated these effects by obtaining two different expert label-sets for the PROSTATEx I challenge training dataset (n = 198) and using them, in addition to an in-house dataset (n = 233), to assess the effect on segmentation performance. The automatic segmentation method we used was nnU-Net. RESULTS The selection of training/testing label-set had a significant (p < 0.001) impact on model performance. Furthermore, it was found that model performance was significantly (p < 0.001) higher when the model was trained and tested with the same label-set. Moreover, the results showed that agreement between automatic segmentations was significantly (p < 0.0001) higher than agreement between manual segmentations and that the models were able to outperform the human label-sets used to train them. CONCLUSIONS We investigated the impact of label-set selection on the performance of a DL-based prostate segmentation model. We found that the use of different sets of manual prostate gland and zone segmentations has a measurable impact on model performance. Nevertheless, DL-based segmentation appeared to have a greater inter-reader agreement than manual segmentation. More thought should be given to the label-set, with a focus on multicenter manual segmentation and agreement on common procedures. CRITICAL RELEVANCE STATEMENT Label-set selection significantly impacts the performance of a deep learning-based prostate segmentation model. Models using different label-set showed higher agreement than manual segmentations. KEY POINTS • Label-set selection has a significant impact on the performance of automatic segmentation models. • Deep learning-based models demonstrated true learning rather than simply mimicking the label-set. • Automatic segmentation appears to have a greater inter-reader agreement than manual segmentation.
Affiliation(s)
- Jakob Meglič
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Faculty of Medicine, University of Ljubljana, 1000, Ljubljana, Slovenia.
- Mohammed R S Sunoqrot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Tone Frost Bathen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Mattijs Elschot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway.
18
|
Bianconi F, Salis R, Fravolini ML, Khan MU, Minestrini M, Filippi L, Marongiu A, Nuvoli S, Spanu A, Palumbo B. Performance Analysis of Six Semi-Automated Tumour Delineation Methods on [ 18F] Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography (FDG PET/CT) in Patients with Head and Neck Cancer. SENSORS (BASEL, SWITZERLAND) 2023; 23:7952. [PMID: 37766009 PMCID: PMC10537871 DOI: 10.3390/s23187952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 09/01/2023] [Accepted: 09/15/2023] [Indexed: 09/29/2023]
Abstract
Background. Head and neck cancer (HNC) is the seventh most common neoplastic disorder at the global level. Contouring HNC lesions on [18F] Fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) scans plays a fundamental role in diagnosis, risk assessment, radiotherapy planning and post-treatment evaluation. However, manual contouring is a lengthy and tedious procedure which requires significant effort from the clinician. Methods. We evaluated the performance of six hand-crafted, training-free methods (four threshold-based, two algorithm-based) for the semi-automated delineation of HNC lesions on FDG PET/CT. This study was carried out on a single-centre population of n=103 subjects, and the standard of reference was manual segmentation generated by nuclear medicine specialists. Figures of merit were the Sørensen-Dice coefficient (DSC) and relative volume difference (RVD). Results. Median DSC ranged between 0.595 and 0.792, and median RVD between -22.0% and 87.4%. The click-and-draw and Nestle's methods achieved the best segmentation accuracy (median DSC, respectively, 0.792 ± 0.178 and 0.762 ± 0.107; median RVD, respectively, -21.6% ± 1270.8% and -32.7% ± 40.0%) and outperformed the other methods by a significant margin. Nestle's method also resulted in a lower dispersion of the data, hence showing stronger inter-patient stability. The accuracy of the two best methods was in agreement with the most recent state-of-the-art results. Conclusions. Semi-automated PET delineation methods show potential to assist clinicians in the segmentation of HNC lesions on FDG PET/CT images, although manual refinement may sometimes be needed to obtain clinically acceptable ROIs.
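The two figures of merit named above, the Sørensen-Dice coefficient (DSC) and the relative volume difference (RVD), have standard definitions on binary masks. The sketch below shows how they are typically computed; the variable names and the toy example are illustrative, not the study's implementation.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Sørensen-Dice coefficient between an automated and a reference binary mask."""
    auto, ref = auto_mask.astype(bool), ref_mask.astype(bool)
    denom = auto.sum() + ref.sum()
    return 2.0 * np.logical_and(auto, ref).sum() / denom if denom else 1.0

def relative_volume_difference(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Relative volume difference (%) of the automated mask with respect to the reference."""
    v_auto = auto_mask.astype(bool).sum()
    v_ref = ref_mask.astype(bool).sum()
    return 100.0 * (v_auto - v_ref) / v_ref

# Toy 3D example (illustrative only)
ref = np.zeros((10, 10, 10), dtype=bool); ref[2:8, 2:8, 2:8] = True
auto = np.zeros_like(ref); auto[3:8, 2:8, 2:8] = True
print(dice_coefficient(auto, ref), relative_volume_difference(auto, ref))
```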
Collapse
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy; (M.L.F.); (M.U.K.)
| | - Roberto Salis
- Unit of Nuclear Medicine, Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; (R.S.); (A.M.); (S.N.)
| | - Mario Luca Fravolini
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy; (M.L.F.); (M.U.K.)
| | - Muhammad Usama Khan
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy; (M.L.F.); (M.U.K.)
| | - Matteo Minestrini
- Section of Nuclear Medicine and Health Physics, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy; (M.M.); (B.P.)
| | - Luca Filippi
- Policlinico Tor Vergata Hospital, Viale Oxford 81, 00133 Rome, Italy;
| | - Andrea Marongiu
- Unit of Nuclear Medicine, Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; (R.S.); (A.M.); (S.N.)
| | - Susanna Nuvoli
- Unit of Nuclear Medicine, Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; (R.S.); (A.M.); (S.N.)
| | - Angela Spanu
- Unit of Nuclear Medicine, Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy; (R.S.); (A.M.); (S.N.)
| | - Barbara Palumbo
- Section of Nuclear Medicine and Health Physics, Department of Medicine and Surgery, Università degli Studi di Perugia, Piazza Lucio Severi 1, 06132 Perugia, Italy; (M.M.); (B.P.)
| |
Collapse
|
19
|
Constant C, Aubin CE, Kremers HM, Garcia DVV, Wyles CC, Rouzrokh P, Larson AN. The use of deep learning in medical imaging to improve spine care: A scoping review of current literature and clinical applications. NORTH AMERICAN SPINE SOCIETY JOURNAL 2023; 15:100236. [PMID: 37599816 PMCID: PMC10432249 DOI: 10.1016/j.xnsj.2023.100236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Accepted: 06/14/2023] [Indexed: 08/22/2023]
Abstract
Background Artificial intelligence is a revolutionary technology that promises to assist clinicians in improving patient care. In radiology, deep learning (DL) is widely used in clinical decision aids due to its ability to analyze complex patterns and images. It allows for rapid, enhanced data and imaging analysis, from diagnosis to outcome prediction. The purpose of this study was to evaluate the current literature and clinical utilization of DL in spine imaging. Methods This study is a scoping review and utilized the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to review the scientific literature from 2012 to 2021. A search in the PubMed, Web of Science, Embase, and IEEE Xplore databases with syntax specific for DL and medical imaging in spine care applications was conducted to collect all original publications on the subject. Specific data were extracted from the available literature, including algorithm application, algorithms tested, database type and size, algorithm training method, and outcome of interest. Results A total of 365 studies (total sample of 232,394 patients) were included and grouped into 4 general applications: diagnostic tools, clinical decision support tools, automated clinical/instrumentation assessment, and clinical outcome prediction. Notable disparities exist in the selected algorithms and the training across multiple disparate databases. The most frequently used algorithms were U-Net and ResNet. A DL model was developed and validated in 92% of included studies, while a pre-existing DL model was investigated in 8%. Of all developed models, only 15% have been externally validated. Conclusions Based on this scoping review, DL in spine imaging is used in a broad range of clinical applications, particularly for diagnosing spinal conditions. There is a wide variety of DL algorithms, database characteristics, and training methods. Future studies should focus on external validation of existing models before bringing them into clinical use.
Collapse
Affiliation(s)
- Caroline Constant
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
- AO Research Institute Davos, Clavadelerstrasse 8, CH 7270, Davos, Switzerland
| | - Carl-Eric Aubin
- Polytechnique Montreal, 2500 Chem. de Polytechnique, Montréal, QC H3T 1J4, Canada
| | - Hilal Maradit Kremers
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
| | - Diana V. Vera Garcia
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
| | - Cody C. Wyles
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
| | - Pouria Rouzrokh
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Radiology Informatics Laboratory, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
| | - Annalise Noelle Larson
- Orthopedic Surgery AI Laboratory, Mayo Clinic, 200 1st St Southwest, Rochester, MN, 55902, United States
- Department of Orthopedic Surgery, Mayo Clinic, 200, 1st St Southwest, Rochester, MN, 55902, United States
| |
Collapse
|
20
|
Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, Ferentinos K, Strouthos I, Zamboglou C, Karagiannis E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol 2023; 13:1213068. [PMID: 37601695 PMCID: PMC10436522 DOI: 10.3389/fonc.2023.1213068] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 07/17/2023] [Indexed: 08/22/2023] Open
Abstract
Purpose/objectives Auto-segmentation with artificial intelligence (AI) offers an opportunity to reduce inter- and intra-observer variability in contouring, to improve the quality of contours, as well as to reduce the time taken to conduct this manual task. In this work we benchmark the AI auto-segmentation contours produced by five commercial vendors against a common dataset. Methods and materials The organ at risk (OAR) contours generated by five commercial AI auto-segmentation solutions (Mirada (Mir), MVision (MV), Radformation (Rad), RayStation (Ray) and TheraPanacea (Ther)) were compared to manually-drawn expert contours from 20 breast, 20 head and neck, 20 lung and 20 prostate patients. Comparisons were made using geometric similarity metrics including volumetric and surface Dice similarity coefficient (vDSC and sDSC), Hausdorff distance (HD) and Added Path Length (APL). To assess the time saved, the time taken to manually draw the expert contours, as well as the time to correct the AI contours, were recorded. Results There are differences in the number of CT contours offered by each AI auto-segmentation solution at the time of the study (Mir 99; MV 143; Rad 83; Ray 67; Ther 86), with all offering contours of some lymph node levels as well as OARs. Averaged across all structures, the median vDSCs were good for all systems and compared favorably with existing literature: Mir 0.82; MV 0.88; Rad 0.86; Ray 0.87; Ther 0.88. All systems offer substantial time savings, ranging between: breast 14-20 mins; head and neck 74-93 mins; lung 20-26 mins; prostate 35-42 mins. The time saved, averaged across all structures, was similar for all systems: Mir 39.8 mins; MV 43.6 mins; Rad 36.6 min; Ray 43.2 mins; Ther 45.2 mins. Conclusions All five commercial AI auto-segmentation solutions evaluated in this work offer high quality contours in significantly reduced time compared to manual contouring, and could be used to render the radiotherapy workflow more efficient and standardized.
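Of the geometric metrics listed above, Added Path Length (APL) is the least standardized: it measures how much of the reference contour is missing from the AI contour and would therefore have to be redrawn. The voxel-based approximation below is a hedged sketch under stated assumptions (boundary extraction by morphological erosion, optional voxel tolerance), not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def boundary(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def added_path_length(ai_mask: np.ndarray, ref_mask: np.ndarray, tolerance_vox: int = 0) -> int:
    """Number of reference-boundary voxels farther than `tolerance_vox` from the AI boundary."""
    ref_edge = boundary(ref_mask)
    ai_edge = boundary(ai_mask)
    if tolerance_vox > 0:
        ai_edge = binary_dilation(ai_edge, iterations=tolerance_vox)
    return int(np.logical_and(ref_edge, ~ai_edge).sum())
```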
Collapse
Affiliation(s)
- Paul J. Doolan
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
| | | | - Yiannis Roussakis
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
| | - Agnes Leczynski
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Mary Peratikou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Melka Benjamin
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| | - Iosif Strouthos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| | - Constantinos Zamboglou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Department of Radiation Oncology, Medical Center – University of Freiburg, Freiburg, Germany
| | - Efstratios Karagiannis
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| |
Collapse
|
21
|
Lay N, Anari PY, Chaurasia A, Firouzabadi FD, Harmon S, Turkbey E, Gautam R, Samimi S, Merino MJ, Ball MW, Linehan WM, Turkbey B, Malayeri AA. Deep learning-based decision forest for hereditary clear cell renal cell carcinoma segmentation on MRI. Med Phys 2023; 50:5020-5029. [PMID: 36855860 PMCID: PMC10683486 DOI: 10.1002/mp.16303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 12/15/2022] [Accepted: 01/04/2023] [Indexed: 03/02/2023] Open
Abstract
BACKGROUND von Hippel-Lindau syndrome (VHL) is an autosomal dominant hereditary syndrome with an increased predisposition to developing numerous cysts and tumors, almost exclusively clear cell renal cell carcinoma (ccRCC). Considering the lifelong surveillance in such patients to monitor the disease, patients with VHL are preferentially imaged using MRI to eliminate radiation exposure. PURPOSE Segmentation of kidney and tumor structures on MRI in VHL patients is useful in lesion characterization (e.g., cyst vs. tumor), volumetric lesion analysis, and tumor growth prediction. However, automated tasks such as ccRCC segmentation on MRI are sparsely studied. We develop segmentation methodology for ccRCC on T1-weighted precontrast, corticomedullary, nephrogenic, and excretory contrast phase MRI. METHODS We applied a new neural network approach using a novel differentiable decision forest, called hinge forest (HF), to segment kidney parenchyma, cysts, and ccRCC tumors in 117 images from 115 patients. This data set represented an unprecedented 504 ccRCCs with 1171 cystic lesions obtained at five different MRI scanners. The HF architecture was compared with U-Net on 10 randomized splits with 75% used for training and 25% used for testing. Both methods were trained with Adam using default parameters (α = 0.001, β1 = 0.9, β2 = 0.999) over 1000 epochs. We further demonstrated some interpretability of our HF method by exploiting the decision tree structure. RESULTS The HF achieved an average kidney, cyst, and tumor Dice similarity coefficient (DSC) of 0.75 ± 0.03, 0.44 ± 0.05, and 0.53 ± 0.04, respectively, while U-Net achieved an average kidney, cyst, and tumor DSC of 0.78 ± 0.02, 0.41 ± 0.04, and 0.46 ± 0.05, respectively. The HF significantly outperformed U-Net on tumors, while U-Net significantly outperformed the HF when segmenting kidney parenchyma (α < 0.01). CONCLUSIONS For the task of ccRCC segmentation, the HF can offer better segmentation performance compared to the traditional U-Net architecture. The leaf maps can glean hints about deep learning features that might prove to be useful in other automated tasks such as tumor characterization.
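For orientation, the training configuration stated in the abstract (Adam with default parameters over 1000 epochs) corresponds to the PyTorch setup sketched below. The placeholder network and the empty data loader are illustrative assumptions; the actual hinge forest and U-Net implementations are not reproduced here.

```python
import torch

model = torch.nn.Conv3d(1, 4, kernel_size=3, padding=1)  # placeholder, not HF or U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))  # Adam defaults
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(1000):          # 1000 epochs, as stated in the abstract
    for image, target in []:       # substitute a real DataLoader over MRI volumes
        optimizer.zero_grad()
        loss = loss_fn(model(image), target)
        loss.backward()
        optimizer.step()
```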
Collapse
Affiliation(s)
- Nathan Lay
- Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, Bethesda, Maryland, USA
| | - Pouria Yazdian Anari
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
| | - Aditi Chaurasia
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
| | | | - Stephanie Harmon
- Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, Bethesda, Maryland, USA
| | - Evrim Turkbey
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
| | - Rabindra Gautam
- Urologic Oncology Branch, Center for Cancer Research, National Cancer Institute, Bethesda, Maryland, USA
| | - Safa Samimi
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
| | - Maria J. Merino
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, Maryland, USA
| | - Mark W. Ball
- Urologic Oncology Branch, Center for Cancer Research, National Cancer Institute, Bethesda, Maryland, USA
| | - William Marston Linehan
- Urologic Oncology Branch, Center for Cancer Research, National Cancer Institute, Bethesda, Maryland, USA
| | - Baris Turkbey
- Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, Bethesda, Maryland, USA
| | - Ashkan A. Malayeri
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
| |
Collapse
|
22
|
Zhou M, Xu Z, Tong RKY. Superpixel-guided class-level denoising for unsupervised domain adaptive fundus image segmentation without source data. Comput Biol Med 2023; 162:107061. [PMID: 37263152 DOI: 10.1016/j.compbiomed.2023.107061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 05/11/2023] [Accepted: 05/20/2023] [Indexed: 06/03/2023]
Abstract
Unsupervised domain adaptation (UDA), which is used to alleviate the domain shift between the source domain and target domain, has attracted substantial research interest. Previous studies have proposed effective UDA methods which require both labeled source data and unlabeled target data to achieve desirable distribution alignment. However, due to privacy concerns, the vendor side often can only trade the pretrained source model without providing the source data to the targeted client, leading to failed adaptation by classical UDA techniques. To address this issue, in this paper, a novel Superpixel-guided Class-level Denoised self-training framework (SCD) is proposed, aiming at effectively adapting the pretrained source model to the target domain in the absence of source data. Since the source data is unavailable, the model can only be trained on the target domain with the pseudo labels obtained from the pretrained source model. However, due to domain shift, the predictions obtained by the source model on the target domain are noisy. Considering this, we propose three mutual-reinforcing components tailored to our self-training framework: (i) an adaptive class-aware thresholding strategy for more balanced pseudo label generation, (ii) a masked superpixel-guided clustering method for generating multiple content-adaptive and spatial-adaptive feature centroids that enhance the discriminability of final prototypes for effective prototypical label denoising, and (iii) adaptive learning schemes for suspected noisy-labeled and correct-labeled pixels to effectively utilize the valuable information available. Comprehensive experiments on multi-site fundus image segmentation demonstrate the superior performance of our approach and the effectiveness of each component.
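Component (i), adaptive class-aware thresholding for pseudo-label generation, can be illustrated with a short sketch: each class receives its own confidence threshold so that less confident (often minority) classes still contribute pseudo-labels. The percentile-based rule below is an assumption chosen for illustration, not the paper's exact strategy.

```python
import numpy as np

def class_aware_pseudo_labels(probs: np.ndarray, percentile: float = 70.0) -> np.ndarray:
    """probs: (C, H, W) softmax output of the pretrained source model on a target image."""
    hard = probs.argmax(axis=0)                        # provisional class per pixel
    conf = probs.max(axis=0)                           # confidence per pixel
    pseudo = np.full(hard.shape, -1, dtype=np.int64)   # -1 marks ignored pixels
    for c in range(probs.shape[0]):
        sel = hard == c
        if sel.any():
            thr = np.percentile(conf[sel], percentile)  # class-specific threshold
            pseudo[sel & (conf >= thr)] = c
    return pseudo
```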
Collapse
Affiliation(s)
- Meng Zhou
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
| | - Zhe Xu
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China.
| | - Raymond Kai-Yu Tong
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China.
| |
Collapse
|
23
|
Huang Y, Jiao J, Yu J, Zheng Y, Wang Y. RsALUNet: A reinforcement supervision U-Net-based framework for multi-ROI segmentation of medical images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2023]
|
24
|
Marzola A, McGreevy KS, Mussa F, Volpe Y, Governi L. HyM3D: A hybrid method for the automatic 3D reconstruction of a defective cranial vault. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 234:107516. [PMID: 37023601 DOI: 10.1016/j.cmpb.2023.107516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 03/08/2023] [Accepted: 03/27/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND AND OBJECTIVE The ability to accomplish a consistent restoration of a missing or deformed anatomical area is a fundamental step for defining a custom implant, especially in the maxillofacial and cranial reconstruction where the aesthetical aspect is crucial for a successful surgical outcome. At the same time, this task is also the most difficult, time-consuming, and complicated across the whole reconstruction process. This is mostly due to the high geometric complexity of the anatomical structures, insufficient references, and significant interindividual anatomical heterogeneity. Numerous solutions, specifically for the neurocranium, have been put forward in the scientific literature to address the reconstruction issue, but none of them has yet been persuasive enough to guarantee an easily automatable approach with a consistent shape reconstruction. METHODS This work aims to present a novel reconstruction method (named HyM3D) for the automatic restoration of the exocranial surface by ensuring both the symmetry of the resulting skull and the continuity between the reconstructive patch and the surrounding bone. To achieve this goal, the strengths of the Template-based methods are exploited to provide knowledge of the missing or deformed region and to guide a subsequent Surface Interpolation-based algorithm. HyM3D is an improved version of a methodology presented by the authors in a previous publication for the restoration of unilateral defects. Differently from the first version, the novel procedure applies to all kinds of cranial defects, whether they are unilateral or not. RESULTS The presented method has been tested on several test cases, both synthetic and real, and the results show that it is reliable and trustworthy, providing a consistent outcome with no user intervention even when dealing with complex defects. CONCLUSIONS HyM3D method proved to be a valid alternative to the existing approaches for the digital reconstruction of a defective cranial vault; furthermore, with respect to the current alternatives, it demands less user interaction since the method is landmarks-independent and does not require any patch adaptation.
Collapse
Affiliation(s)
- Antonio Marzola
- Department of Industrial Engineering of Florence, University of Florence (Italy), via di Santa Marta 3, Firenze 50139, Italy.
| | | | - Federico Mussa
- Meyer Children's Hospital IRCCS, Viale Pieraccini 24, Florence 50141, Italy
| | - Yary Volpe
- Department of Industrial Engineering of Florence, University of Florence (Italy), via di Santa Marta 3, Firenze 50139, Italy
| | - Lapo Governi
- Department of Industrial Engineering of Florence, University of Florence (Italy), via di Santa Marta 3, Firenze 50139, Italy
| |
Collapse
|
25
|
Luu MH, Mai HS, Pham XL, Le QA, Le QK, Walsum TV, Le NH, Franklin D, Le VH, Moelker A, Chu DT, Trung NL. Quantification of liver-Lung shunt fraction on 3D SPECT/CT images for selective internal radiation therapy of liver cancer using CNN-based segmentations and non-rigid registration. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 233:107453. [PMID: 36921463 DOI: 10.1016/j.cmpb.2023.107453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 01/25/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
PURPOSE Selective internal radiation therapy (SIRT) has been proven to be an effective treatment for hepatocellular carcinoma (HCC) patients. In clinical practice, treatment planning for SIRT using 90Y microspheres requires estimation of the liver-lung shunt fraction (LSF) to avoid radiation pneumonitis. Currently, the manual segmentation method used to draw a region of interest (ROI) of the liver and lung in 2D planar imaging of 99mTc-MAA and 3D SPECT/CT images is inconvenient, time-consuming and observer-dependent. In this study, we propose and evaluate a nearly automatic method for LSF quantification using 3D SPECT/CT images, offering improved performance compared with the current manual segmentation method. METHODS We retrospectively acquired 3D SPECT with non-contrast-enhanced CT images (nCECT) of 60 HCC patients from a SPECT/CT scanner, along with the corresponding diagnostic contrast-enhanced CT images (CECT). Our approach for LSF quantification is to use CNN-based methods for liver and lung segmentation in the nCECT image. We first apply 3D ResUnet to coarsely segment the liver. If the liver segmentation contains a large error, we dilate the coarse liver segmentation into a liver mask used as an ROI in the nCECT image. Subsequently, non-rigid registration is applied to deform the liver in the CECT image to fit that obtained in the nCECT image. The final liver segmentation is obtained by segmenting the liver in the deformed CECT image using nnU-Net. In addition, the lung segmentations are obtained using 2D ResUnet. Finally, LSF quantitation is performed based on the number of counts in the SPECT image inside the segmentations. EVALUATIONS AND RESULTS To evaluate the liver segmentation accuracy, we used the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and max surface distance (MSD), and compared the proposed method to five well-known CNN-based methods for liver segmentation. Furthermore, the LSF error obtained by the proposed method was compared to a state-of-the-art method, modified Deepmedic, and to the LSF quantifications obtained by manual segmentation. The results show that the proposed method achieved a DSC score for the liver segmentation that is comparable to other state-of-the-art methods, with an average of 0.93, and the highest consistency in segmentation accuracy, yielding a standard deviation of the DSC score of 0.01. The proposed method also obtains the lowest ASSD and MSD scores on average (2.6 mm and 31.5 mm, respectively). Moreover, the proposed method achieves a median LSF error of 0.14%, which is a statistically significant improvement over the state-of-the-art method (p=0.004), and is much smaller than the median error in manual LSF determination by medical experts using 2D planar images (1.74%, p<0.001). CONCLUSIONS A method for LSF quantification using 3D SPECT/CT images based on CNNs and non-rigid registration was proposed, evaluated and compared to state-of-the-art techniques. The proposed method can quantitatively determine the LSF with high accuracy and has the potential to be applied in clinical practice.
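The final quantification step, computing the LSF from the SPECT counts inside the liver and lung segmentations, reduces to a simple ratio. The sketch below shows that computation under the usual definition LSF = lung counts / (lung counts + liver counts); the array handling is illustrative, not the authors' code.

```python
import numpy as np

def lung_shunt_fraction(spect: np.ndarray, lung_mask: np.ndarray, liver_mask: np.ndarray) -> float:
    """LSF (%) from counts of the 3D SPECT volume inside the lung and liver segmentations."""
    lung_counts = spect[lung_mask.astype(bool)].sum()
    liver_counts = spect[liver_mask.astype(bool)].sum()
    return 100.0 * lung_counts / (lung_counts + liver_counts)
```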
Collapse
Affiliation(s)
- Manh Ha Luu
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam; FET, VNU University of Engineering and Technology, Hanoi, Vietnam; Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands.
| | - Hong Son Mai
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
| | - Xuan Loc Pham
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
| | - Quoc Anh Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
| | - Quoc Khanh Le
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
| | - Theo van Walsum
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
| | - Ngoc Ha Le
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
| | - Daniel Franklin
- School of Electrical and Data Engineering, University of Technology Sydney, Sydney, Australia
| | - Vu Ha Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam; FET, VNU University of Engineering and Technology, Hanoi, Vietnam
| | - Adriaan Moelker
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
| | - Duc Trinh Chu
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
| | - Nguyen Linh Trung
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
| |
Collapse
|
26
|
Wu F, Zhuang X. Minimizing Estimated Risks on Unlabeled Data: A New Formulation for Semi-Supervised Medical Image Segmentation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:6021-6036. [PMID: 36251907 DOI: 10.1109/tpami.2022.3215186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Supervised segmentation can be costly, particularly in applications of biomedical image analysis where large scale manual annotations from experts are generally too expensive to be available. Semi-supervised segmentation, able to learn from both the labeled and unlabeled images, could be an efficient and effective alternative for such scenarios. In this work, we propose a new formulation based on risk minimization, which makes full use of the unlabeled images. Different from most of the existing approaches which solely explicitly guarantee the minimization of prediction risks from the labeled training images, the new formulation also considers the risks on unlabeled images. Particularly, this is achieved via an unbiased estimator, based on which we develop a general framework for semi-supervised image segmentation. We validate this framework on three medical image segmentation tasks, namely cardiac segmentation on ACDC2017, optic cup and disc segmentation on REFUGE dataset and 3D whole heart segmentation on MM-WHS dataset. Results show that the proposed estimator is effective, and the segmentation method achieves superior performance and demonstrates great potential compared to the other state-of-the-art approaches. Our code and data will be released via https://zmiclab.github.io/projects.html, once the manuscript is accepted for publication.
Collapse
|
27
|
Cai G, Liu H, Zou W, Hu N, Wang J. Registration of 3D medical images based on unsupervised cooperative cascade of deep networks. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
28
|
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107408. [PMID: 36805279 DOI: 10.1016/j.cmpb.2023.107408] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2022] [Revised: 01/31/2023] [Accepted: 02/04/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have been used for medical imaging for a long time, but they did not achieve their full potential in the past because of insufficient computing power and scarcity of training data. In recent years, we have seen substantial growth in DL networks because of improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize to data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of the segmentation target and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system is based on two significant technical inventions. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network, for skin lesion segmentation using broad receptive fields and spatial edge attention feature fusion. We examine the trained model's generalization capabilities on skin lesion segmentation to quantify these two inventions. We cross-examine the model using two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION Numerous experiments reveal that our system produces excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
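A two-phase cross-domain transfer-learning schedule of the kind described (model-level transfer from ImageNet weights, then data-level transfer through an intermediate dermoscopy dataset before the target dataset) can be sketched as two successive fine-tuning stages. The backbone, loaders, and hyperparameters below are illustrative placeholders; the authors' nSknRSUNet architecture is not reproduced.

```python
import torch
import torchvision

# Model-level transfer: start from ImageNet-pretrained weights (placeholder backbone).
backbone = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)

def fine_tune(model, loader, epochs: int, lr: float):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Data-level transfer (hypothetical loaders): adapt on a source dermoscopy set,
# then fine-tune on the target dataset.
# fine_tune(backbone, molemap_loader, epochs=30, lr=1e-4)
# fine_tune(backbone, ham10000_loader, epochs=30, lr=1e-5)
```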
Collapse
Affiliation(s)
- Meghana Karri
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
| | - Chandra Sekhara Rao Annavarapu
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India.
| | - U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan.
| |
Collapse
|
29
|
Karimi D, Gholipour A. Improving Calibration and Out-of-Distribution Detection in Deep Models for Medical Image Segmentation. IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE 2023; 4:383-397. [PMID: 37868336 PMCID: PMC10586223 DOI: 10.1109/tai.2022.3159510] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2023]
Abstract
Convolutional Neural Networks (CNNs) have proved to be powerful medical image segmentation models. In this study, we address some of the main unresolved issues regarding these models. Specifically, training of these models on small medical image datasets is still challenging, with many studies promoting techniques such as transfer learning. Moreover, these models are infamous for producing over-confident predictions and for failing silently when presented with out-of-distribution (OOD) test data. In this paper, for improving prediction calibration we advocate for multi-task learning, i.e., training a single model on several different datasets, spanning different organs of interest and different imaging modalities. We show that multi-task learning can significantly improve model confidence calibration. For OOD detection, we propose a novel method based on spectral analysis of CNN feature maps. We show that different datasets, representing different imaging modalities and/or different organs of interest, have distinct spectral signatures, which can be used to identify whether or not a test image is similar to the images used for training. We show that our proposed method is more accurate than several competing methods, including methods based on prediction uncertainty and image classification.
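The OOD-detection idea, summarizing CNN feature maps by their spectral content and flagging test images whose spectra fall far from those seen in training, can be sketched as follows. The radially averaged log-power spectrum and the nearest-neighbour distance used here are illustrative assumptions, not the paper's exact signature or score.

```python
import numpy as np

def spectral_signature(feature_map: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """feature_map: (C, H, W). Radially averaged log-power spectrum, averaged over channels."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(feature_map), axes=(-2, -1))) ** 2
    power = power.mean(axis=0)
    h, w = power.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    bins = np.linspace(0.0, radius.max() + 1e-6, n_bins + 1)
    idx = np.digitize(radius.ravel(), bins) - 1
    sums = np.bincount(idx, weights=np.log1p(power).ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

def ood_score(signature: np.ndarray, train_signatures: np.ndarray) -> float:
    """Distance to the closest training signature; larger values suggest OOD input."""
    return float(np.min(np.linalg.norm(train_signatures - signature, axis=1)))
```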
Collapse
Affiliation(s)
- Davood Karimi
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| | - Ali Gholipour
- Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| |
Collapse
|
30
|
Huang C, Ying S, Huang M, Qiu C, Lu F, Peng Z, Kong D. Three-Dimensional Voxel-Wise Quantitative Assessment of Imaging Features in Hepatocellular Carcinoma. Diagnostics (Basel) 2023; 13:diagnostics13061170. [PMID: 36980478 PMCID: PMC10047821 DOI: 10.3390/diagnostics13061170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 03/03/2023] [Accepted: 03/10/2023] [Indexed: 03/30/2023] Open
Abstract
Voxel-wise quantitative assessment of typical characteristics in three-dimensional (3D) multiphase computed tomography (CT) imaging, especially arterial phase hyperenhancement (APHE) and subsequent washout (WO), is crucial for the diagnosis and therapy of hepatocellular carcinoma (HCC). However, this process is still missing in practice. Radiologists often visually estimate these features, which limits diagnostic accuracy due to subjective interpretation and qualitative assessment. Quantitative assessment is one of the solutions to this problem. However, performing voxel-wise assessment in 3D is difficult due to the misalignments between images caused by respiratory and other physiological motions. In this paper, based on the Liver Imaging Reporting and Data System (v2018), we propose a registration-based quantitative model for the 3D voxel-wise assessment of image characteristics through multiple CT imaging phases. Specifically, we selected three phases from the sequential CT imaging phases, i.e., the pre-contrast phase (Pre), arterial phase (AP) and delayed phase (DP), and then registered the Pre and DP images to the AP image to extract and assess the major imaging characteristics. An iterative reweighted local cross-correlation was applied in the proposed registration model to construct the fidelity term for comparison of intensity features across different imaging phases, which is challenging due to their distinct intensity appearance. Experiments on a clinical dataset showed that the mean Dice similarity coefficients of the liver were 98.6% and 98.1%, the mean surface distances were 0.38 and 0.54 mm, and the mean Hausdorff distances were 4.34 and 6.16 mm, indicating that quantitative estimation can be accomplished with high accuracy. For the classification of APHE, the results obtained by our method were consistent with those acquired by experts. For WO, the effectiveness of the model was verified in terms of the WO volume ratio.
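The fidelity term mentioned above builds on windowed (local) normalized cross-correlation between two imaging phases; the paper uses an iteratively reweighted variant, but the basic building block can be sketched as below. The window size and the box-filter implementation are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_cross_correlation(fixed: np.ndarray, moving: np.ndarray, window: int = 9) -> np.ndarray:
    """Voxel-wise local normalized cross-correlation between two 3D images."""
    fixed = fixed.astype(np.float64)
    moving = moving.astype(np.float64)
    mean_f = uniform_filter(fixed, window)
    mean_m = uniform_filter(moving, window)
    cov = uniform_filter(fixed * moving, window) - mean_f * mean_m
    var_f = uniform_filter(fixed * fixed, window) - mean_f ** 2
    var_m = uniform_filter(moving * moving, window) - mean_m ** 2
    return cov / np.sqrt(np.maximum(var_f * var_m, 1e-8))
```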
Collapse
Affiliation(s)
- Chongfei Huang
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
| | - Shihong Ying
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310030, China
| | - Meixiang Huang
- The School of Mathematics and Statistics, Minnan Normal University, Zhangzhou 363000, China
| | - Chenhui Qiu
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
| | - Fang Lu
- Department of Mathematics, Zhejiang University of Science and Technology, Hangzhou 310023, China
| | - Zhiyi Peng
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310030, China
| | - Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
| |
Collapse
|
31
|
Laparoscopic Microwave Ablation: Which Technologies Improve the Results. Cancers (Basel) 2023; 15:cancers15061814. [PMID: 36980701 PMCID: PMC10046461 DOI: 10.3390/cancers15061814] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 03/13/2023] [Accepted: 03/14/2023] [Indexed: 03/19/2023] Open
Abstract
Liver resection is the best treatment for hepatocellular carcinoma (HCC) when resectable. Unfortunately, many patients with HCC cannot undergo liver resection. Percutaneous thermoablation represents a valid alternative for inoperable neoplasms and for small HCCs, but it is not always feasible. In cases where the percutaneous approach is not feasible (lesion not visible or in a hazardous location), laparoscopic thermoablation may be indicated. HCC diagnosis is commonly obtained from imaging modalities such as CT and MRI. However, the interpretation of radiological images, which have a two-dimensional appearance, during the surgical procedure and in particular during laparoscopy can be very difficult in many cases for the surgeon, who has to treat the tumor in a three-dimensional environment. In recent years, several technologies have helped surgeons improve the results of ablative treatments. Three-dimensional reconstruction of the radiological images allows the surgeon to assess the exact position of the tumor both before surgery (virtual reality) and during surgery with immersive techniques (augmented reality). Furthermore, indocyanine green (ICG) fluorescence imaging seems to be a valid tool to enhance the precision of laparoscopic thermoablation. Finally, the combination with contrast-enhanced laparoscopic ultrasound could improve the localization and characterization of tumor lesions. This article describes the use of hepatic three-dimensional modeling, ICG fluorescence imaging and laparoscopic ultrasound examination, which are convenient for improving preoperative surgical preparation for a personalized laparoscopic approach.
Collapse
|
32
|
Park S, Kim JH, Kim J, Joseph W, Lee D, Park SJ. Development of a deep learning-based auto-segmentation algorithm for hepatocellular carcinoma (HCC) and application to predict microvascular invasion of HCC using CT texture analysis: preliminary results. Acta Radiol 2023; 64:907-917. [PMID: 35570797 DOI: 10.1177/02841851221100318] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Automatic segmentation has recently been developed to yield objective data. Prediction of microvascular invasion (MVI) of hepatocellular carcinoma (HCC) using radiomics has been reported. PURPOSE To develop a deep learning-based auto-segmentation algorithm (DL-AS) for the detection of HCC and to predict MVI using computed tomography (CT) texture analysis. MATERIAL AND METHODS We retrospectively collected training data from 249 patients with HCC and a validation set from 35 patients. Lesions of the training set were manually drawn by a radiologist in the delayed phase. 2D U-Net was selected as the DL architecture. Using the validation set, one radiologist manually drew 2D and 3D regions of interest twice, and the developed DL-AS was run twice, with a one-month time interval. Reproducibility was calculated using intraclass correlation coefficients (ICC). Logistic regression was performed to predict MVI. RESULTS ICC was in the range of 0.190-0.998/0.341-0.997 for manual 3D/2D segmentation. In contrast, it was perfect in 3D/2D using DL-AS, with a success rate of 88.6% for the detection of HCC. Sphericity was a significant parameter for predicting MVI using 2D DL-AS (odds ratio <0.001; 95% confidence interval <0.001-0.206; P = 0.020). However, 3D DL-AS segmentation did not yield a predictive parameter. CONCLUSION The auto-segmentation of HCC using DL-AS provides perfect reproducibility, although it failed to detect 11.4% of lesions (4/35). However, the extracted parameters yielded different important predictors of MVI in HCC. Sphericity was a significant predictor in 2D DL-AS and 3D manual segmentation, while discrete compactness was a significant predictor in 2D manual segmentation.
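Sphericity, the shape feature that proved predictive here, is conventionally defined as π^(1/3)·(6V)^(2/3)/A with lesion volume V and surface area A. The sketch below estimates it from a binary mask using a marching-cubes surface; the surface-estimation choice is an assumption and may differ from the radiomics pipeline used in the study.

```python
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

def sphericity(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Sphericity of a 3D binary lesion mask: pi**(1/3) * (6V)**(2/3) / A."""
    volume = mask.astype(bool).sum() * float(np.prod(spacing))
    verts, faces, _, _ = marching_cubes(mask.astype(np.uint8), level=0.5, spacing=spacing)
    area = mesh_surface_area(verts, faces)
    return (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / area
```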
Collapse
Affiliation(s)
- Sungeun Park
- Department of Radiology, Konkuk University Medical Center, Seoul, Republic of Korea
| | - Jung Hoon Kim
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Jieun Kim
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
| | | | - Doohee Lee
- Medical IP Co., Ltd, Seoul, Republic of Korea
| | | |
Collapse
|
33
|
Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766 DOI: 10.1088/1361-6560/acba74] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 02/08/2023] [Indexed: 02/10/2023]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, performance of the algorithms, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, CT- or MRI-only-based radiotherapy planning, etc. Only 5 studies validated their models using an independent test set and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Collapse
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| | - Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
| |
Collapse
|
34
|
Penarrubia L, Verstraete A, Orkisz M, Davila E, Boussel L, Yonis H, Mezidi M, Dhelft F, Danjou W, Bazzani A, Sigaud F, Bayat S, Terzi N, Girard M, Bitker L, Roux E, Richard JC. Precision of CT-derived alveolar recruitment assessed by human observers and a machine learning algorithm in moderate and severe ARDS. Intensive Care Med Exp 2023; 11:8. [PMID: 36797424 PMCID: PMC9934943 DOI: 10.1186/s40635-023-00495-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 01/24/2023] [Indexed: 02/18/2023] Open
Abstract
BACKGROUND Assessing measurement error in alveolar recruitment on computed tomography (CT) is of paramount importance to select a reliable threshold identifying patients with high potential for alveolar recruitment and to rationalize positive end-expiratory pressure (PEEP) setting in acute respiratory distress syndrome (ARDS). The aim of this study was to assess both the intra- and inter-observer smallest real difference (SRD) exceeding the measurement error of recruitment using both human and machine learning-made lung segmentation (i.e., delineation) on CT. This single-center observational study was performed on adult ARDS patients. CT scans were acquired at end-expiration and end-inspiration at the PEEP level selected by clinicians, and at end-expiration at PEEP 5 and 15 cmH2O. Two human observers and a machine learning algorithm performed lung segmentation. Recruitment was computed as the weight change of the non-aerated compartment on CT between PEEP 5 and 15 cmH2O. RESULTS Thirteen patients were included, of whom 11 (85%) presented severe ARDS. Intra- and inter-observer measurements of recruitment were virtually unbiased, with 95% confidence intervals (CI95%) encompassing zero. The intra-observer SRD of recruitment amounted to 3.5 [CI95% 2.4-5.2]% of lung weight. The human-human inter-observer SRD of recruitment was slightly higher, amounting to 5.7 [CI95% 4.0-8.0]% of lung weight, as was the human-machine SRD (5.9 [CI95% 4.3-7.8]% of lung weight). Regarding other CT measurements, both intra-observer and inter-observer SRD were close to zero for the CT measurements focusing on aerated lung (end-expiratory lung volume, hyperinflation), and higher for the CT measurements relying on accurate segmentation of the non-aerated lung (lung weight, tidal recruitment…). The average symmetric surface distance between lung segmentation masks was significantly lower in intra-observer comparisons (0.8 mm [interquartile range (IQR) 0.6-0.9]) than in human-human (1.0 mm [IQR 0.8-1.3]) and human-machine (1.1 mm [IQR 0.9-1.3]) inter-observer comparisons. CONCLUSIONS The SRD exceeding intra-observer experimental error in the measurement of alveolar recruitment may be conservatively set to 5% (i.e., the upper value of the CI95%). Human-machine and human-human inter-observer measurement errors with CT are of similar magnitude, suggesting that machine learning segmentation algorithms are a credible alternative to humans for quantifying alveolar recruitment on CT.
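Recruitment as defined above, the weight change of the non-aerated compartment between PEEP 5 and 15 cmH2O, follows from standard quantitative-CT arithmetic. The sketch below uses conventional assumptions (non-aerated = -100 to +100 HU, voxel density ≈ 1 + HU/1000 g/ml) rather than the study's own thresholds or code.

```python
import numpy as np

def compartment_weight_g(ct_hu: np.ndarray, lung_mask: np.ndarray,
                         voxel_volume_ml: float, hu_range=(-100, 100)) -> float:
    """Weight (g) of the lung voxels whose HU falls inside `hu_range`."""
    hu = ct_hu[lung_mask.astype(bool)]
    sel = (hu > hu_range[0]) & (hu <= hu_range[1])
    density = 1.0 + hu[sel] / 1000.0          # g/ml, assuming -1000 HU = air, 0 HU = tissue
    return float((density * voxel_volume_ml).sum())

def recruitment_percent(ct_peep5, mask5, ct_peep15, mask15, voxel_volume_ml) -> float:
    """Recruitment (% of total lung weight) between PEEP 5 and PEEP 15 cmH2O."""
    total_weight = compartment_weight_g(ct_peep5, mask5, voxel_volume_ml, hu_range=(-1000, 100))
    non_aerated_5 = compartment_weight_g(ct_peep5, mask5, voxel_volume_ml)
    non_aerated_15 = compartment_weight_g(ct_peep15, mask15, voxel_volume_ml)
    return 100.0 * (non_aerated_5 - non_aerated_15) / total_weight
```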
Collapse
Affiliation(s)
- Ludmilla Penarrubia
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
| | - Aude Verstraete
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Maciej Orkisz
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
| | - Eduardo Davila
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
| | - Loic Boussel
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
- Service de Radiologie, Hôpital De La Croix Rousse, Hospices Civils de Lyon, Lyon, France
| | - Hodane Yonis
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Mehdi Mezidi
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Francois Dhelft
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
- Université de Lyon, Université Claude Bernard Lyon 1, Villeurbanne, France
| | - William Danjou
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Alwin Bazzani
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Florian Sigaud
- Service de Médecine-Intensive Réanimation, CHU Grenoble-Alpes, Grenoble, France
| | - Sam Bayat
- Synchrotron Radiation for Biomedicine Laboratory (STROBE), INSERM UA07, Univ. Grenoble Alpes, Grenoble, France
- Department of Pulmonology and Physiology, Grenoble University Hospital, Grenoble, France
| | - Nicolas Terzi
- Maladies Infectieuses et Réanimation Médicale, CHU Rennes, Rennes, France
- Faculté de Médecine, Biosit, Université Rennes 1, Rennes, France
- INSERM-CIC-1414, Faculté de Médecine, IFR 140, Université Rennes I, Rennes, France
| | - Mehdi Girard
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Laurent Bitker
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| | - Emmanuel Roux
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
| | - Jean-Christophe Richard
- Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, INSERM, CREATIS UMR 5220, U1294, Université de Lyon, Villeurbanne, France
- Service de Médecine Intensive Réanimation, Hôpital de la Croix Rousse, Hospices Civils de Lyon, 103 Grande Rue de La Croix Rousse, 69004 Lyon, France
| |
Collapse
|
35
|
Zhu X, Ding M, Zhang X. Free form deformation and symmetry constraint‐based multi‐modal brain image registration using generative adversarial nets. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2023. [DOI: 10.1049/cit2.12159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Affiliation(s)
- Xingxing Zhu
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
| | - Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
| | - Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
36
|
Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is 'Clinically Acceptable'? Diagnostics (Basel) 2023; 13:diagnostics13040667. [PMID: 36832155 PMCID: PMC9955359 DOI: 10.3390/diagnostics13040667] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 01/21/2023] [Accepted: 01/30/2023] [Indexed: 02/12/2023] Open
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining clinical acceptability of new autocontouring and planning tools.
Collapse
Affiliation(s)
- Hana Baroudi
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Kristy K. Brock
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Wenhua Cao
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Xinru Chen
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Caroline Chung
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Laurence E. Court
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Mohammad D. El Basha
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Maguy Farhat
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Skylar Gay
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Mary P. Gronberg
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Aashish Chandra Gupta
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Soleil Hernandez
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Kai Huang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - David A. Jaffray
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Rebecca Lim
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Barbara Marquez
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Kelly Nealon
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Tucker J. Netherton
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Callistus M. Nguyen
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Brandon Reber
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Dong Joo Rhee
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Ramon M. Salazar
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Mihir D. Shanker
- The University of Queensland, Saint Lucia 4072, Australia
- The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Carlos Sjogreen
- Department of Physics, University of Houston, Houston, TX 77004, USA
| | - McKell Woodland
- Department of Imaging Physics, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Computer Science, Rice University, Houston, TX 77005, USA
| | - Jinzhong Yang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Cenji Yu
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| | - Yao Zhao
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
| |
Collapse
|
37
|
Zhang B, Wang Y, Ding C, Deng Z, Li L, Qin Z, Ding Z, Bian L, Yang C. Multi-scale feature pyramid fusion network for medical image segmentation. Int J Comput Assist Radiol Surg 2023; 18:353-365. [PMID: 36042149 DOI: 10.1007/s11548-022-02738-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 08/11/2022] [Indexed: 02/03/2023]
Abstract
PURPOSE Medical image segmentation is the most widely used technique in diagnostic and clinical research. However, accurate segmentation of target organs from blurred border regions and low-contrast adjacent organs in computed tomography (CT) imaging is crucial for clinical diagnosis and treatment. METHODS In this article, we propose a Multi-Scale Feature Pyramid Fusion Network (MS-Net) based on the codec structure formed by combining a Multi-Scale Attention Module (MSAM) and a Stacked Feature Pyramid Module (SFPM). The MSAM is applied to the skip connections and aims to extract different levels of contextual detail by dynamically adjusting the receptive fields at different network depths; the SFPM, which includes multi-scale strategies and a multi-layer Feature Perception Module (FPM), is nested at the deepest point of the network and aims to better focus the network's attention on the target organ by adaptively increasing the weight of the features of interest. RESULTS Experiments demonstrate that the proposed MS-Net significantly improved the Dice score from 91.74% to 94.54% on CHAOS, from 97.59% to 98.59% on Lung, and from 82.55% to 86.06% on ISIC 2018, compared with U-Net. Additionally, comparisons with six other state-of-the-art codec structures show that the presented network has clear advantages on evaluation indicators such as mIoU, Dice, ACC and AUC. CONCLUSION The experimental results show that both the MSAM and SFPM techniques proposed in this paper help the network improve its segmentation, so that the proposed MS-Net achieves better results on the CHAOS, Lung and ISIC 2018 segmentation tasks.
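The abstract above reports overlap metrics such as Dice and mIoU. As a point of reference, the following is a minimal NumPy sketch of how these two metrics are computed from binary masks; it is illustrative only and not the authors' evaluation code.
```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice similarity coefficient and IoU for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Toy example: Dice = 2*2/(3+3) ~ 0.667, IoU = 2/4 = 0.5
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_and_iou(pred, target))
```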
Collapse
Affiliation(s)
- Bing Zhang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Yang Wang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Caifu Ding
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Ziqing Deng
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Linwei Li
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Zesheng Qin
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Zhao Ding
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Lifeng Bian
- Frontier Institute of Chip and System, Fudan University, Shanghai, 200433, China.
| | - Chen Yang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China.
| |
Collapse
|
38
|
Kushnure DT, Tyagi S, Talbar SN. LiM-Net: Lightweight multi-level multiscale network with deep residual learning for automatic liver segmentation in CT images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
39
|
Bilic P, Christ P, Li HB, Vorontsov E, Ben-Cohen A, Kaissis G, Szeskin A, Jacobs C, Mamani GEH, Chartrand G, Lohöfer F, Holch JW, Sommer W, Hofmann F, Hostettler A, Lev-Cohain N, Drozdzal M, Amitai MM, Vivanti R, Sosna J, Ezhov I, Sekuboyina A, Navarro F, Kofler F, Paetzold JC, Shit S, Hu X, Lipková J, Rempfler M, Piraud M, Kirschke J, Wiestler B, Zhang Z, Hülsemeyer C, Beetz M, Ettlinger F, Antonelli M, Bae W, Bellver M, Bi L, Chen H, Chlebus G, Dam EB, Dou Q, Fu CW, Georgescu B, Giró-I-Nieto X, Gruen F, Han X, Heng PA, Hesser J, Moltz JH, Igel C, Isensee F, Jäger P, Jia F, Kaluva KC, Khened M, Kim I, Kim JH, Kim S, Kohl S, Konopczynski T, Kori A, Krishnamurthi G, Li F, Li H, Li J, Li X, Lowengrub J, Ma J, Maier-Hein K, Maninis KK, Meine H, Merhof D, Pai A, Perslev M, Petersen J, Pont-Tuset J, Qi J, Qi X, Rippel O, Roth K, Sarasua I, Schenk A, Shen Z, Torres J, Wachinger C, Wang C, Weninger L, Wu J, Xu D, Yang X, Yu SCH, Yuan Y, Yue M, Zhang L, Cardoso J, Bakas S, Braren R, Heinemann V, Pal C, Tang A, Kadoury S, Soler L, van Ginneken B, Greenspan H, Joskowicz L, Menze B. The Liver Tumor Segmentation Benchmark (LiTS). Med Image Anal 2023; 84:102680. [PMID: 36481607 PMCID: PMC10631490 DOI: 10.1016/j.media.2022.102680] [Citation(s) in RCA: 61] [Impact Index Per Article: 61.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Revised: 09/27/2022] [Accepted: 10/29/2022] [Indexed: 11/18/2022]
Abstract
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks in http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
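The lesion-wise recall quoted above counts, per ground-truth lesion (3D connected component), whether the prediction detects it. Below is a minimal NumPy/SciPy sketch of one such recall computation; the 0.5 overlap threshold is an illustrative assumption, not the LiTS matching criterion.
```python
import numpy as np
from scipy import ndimage

def lesionwise_recall(gt_mask, pred_mask, overlap_thresh=0.5):
    """Fraction of ground-truth lesions hit by the prediction.

    A ground-truth lesion (connected component) counts as detected when the
    predicted mask covers at least `overlap_thresh` of its voxels.
    """
    labeled_gt, n_lesions = ndimage.label(gt_mask > 0)
    if n_lesions == 0:
        return 1.0
    detected = 0
    for lesion_id in range(1, n_lesions + 1):
        lesion = labeled_gt == lesion_id
        overlap = np.logical_and(lesion, pred_mask > 0).sum() / lesion.sum()
        if overlap >= overlap_thresh:
            detected += 1
    return detected / n_lesions
```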
Collapse
Affiliation(s)
- Patrick Bilic
- Department of Informatics, Technical University of Munich, Germany
| | - Patrick Christ
- Department of Informatics, Technical University of Munich, Germany
| | - Hongwei Bran Li
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland.
| | | | - Avi Ben-Cohen
- Department of Biomedical Engineering, Tel-Aviv University, Israel
| | - Georgios Kaissis
- Institute for AI in Medicine, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
| | - Adi Szeskin
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
| | - Colin Jacobs
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | - Gabriel Chartrand
- The University of Montréal Hospital Research Centre (CRCHUM) Montréal, Québec, Canada
| | - Fabian Lohöfer
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany
| | - Julian Walter Holch
- Department of Medicine III, University Hospital, LMU Munich, Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Wieland Sommer
- Department of Radiology, University Hospital, LMU Munich, Germany
| | - Felix Hofmann
- Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Germany; Department of Radiology, University Hospital, LMU Munich, Germany
| | - Alexandre Hostettler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
| | - Naama Lev-Cohain
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
| | | | | | | | - Jacob Sosna
- Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel
| | - Ivan Ezhov
- Department of Informatics, Technical University of Munich, Germany
| | - Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
| | - Fernando Navarro
- Department of Informatics, Technical University of Munich, Germany; Department of Radiation Oncology and Radiotherapy, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
| | - Florian Kofler
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
| | - Johannes C Paetzold
- Department of Computing, Imperial College London, London, United Kingdom; Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany
| | - Suprosanna Shit
- Department of Informatics, Technical University of Munich, Germany
| | - Xiaobin Hu
- Department of Informatics, Technical University of Munich, Germany
| | - Jana Lipková
- Brigham and Women's Hospital, Harvard Medical School, USA
| | - Markus Rempfler
- Department of Informatics, Technical University of Munich, Germany
| | - Marie Piraud
- Department of Informatics, Technical University of Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
| | - Jan Kirschke
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
| | - Benedikt Wiestler
- Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Germany
| | - Zhiheng Zhang
- Department of Hepatobiliary Surgery, the Affiliated Drum Tower Hospital of Nanjing University Medical School, China
| | | | - Marcel Beetz
- Department of Informatics, Technical University of Munich, Germany
| | | | - Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | | | | | - Lei Bi
- School of Computer Science, the University of Sydney, Australia
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
| | - Grzegorz Chlebus
- Fraunhofer MEVIS, Bremen, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Erik B Dam
- Department of Computer Science, University of Copenhagen, Denmark
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Chi-Wing Fu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | | | - Xavier Giró-I-Nieto
- Signal Theory and Communications Department, Universitat Politecnica de Catalunya, Catalonia, Spain
| | - Felix Gruen
- Institute of Control Engineering, Technische Universität Braunschweig, Germany
| | - Xu Han
- Department of computer science, UNC Chapel Hill, USA
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Jürgen Hesser
- Mannheim Institute for Intelligent Systems in Medicine, department of Medicine Mannheim, Heidelberg University, Germany; Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany; Central Institute for Computer Engineering (ZITI), Heidelberg University, Germany
| | | | - Christian Igel
- Department of Computer Science, University of Copenhagen, Denmark
| | - Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | - Paul Jäger
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | - Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
| | - Krishna Chaitanya Kaluva
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Mahendra Khened
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | | | - Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, South Korea
| | | | - Simon Kohl
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tomasz Konopczynski
- Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany
| | - Avinash Kori
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Ganapathy Krishnamurthi
- Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India
| | - Fan Li
- Sensetime, Shanghai, China
| | - Hongchao Li
- Department of Computer Science, Guangdong University of Foreign Studies, China
| | - Junbo Li
- Philips Research China, Philips China Innovation Campus, Shanghai, China
| | - Xiaomeng Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
| | - John Lowengrub
- Departments of Mathematics, Biomedical Engineering, University of California, Irvine, USA; Center for Complex Biological Systems, University of California, Irvine, USA; Chao Family Comprehensive Cancer Center, University of California, Irvine, USA
| | - Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, China
| | - Klaus Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, Germany
| | | | - Hans Meine
- Fraunhofer MEVIS, Bremen, Germany; Medical Image Computing Group, FB3, University of Bremen, Germany
| | - Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | - Akshay Pai
- Department of Computer Science, University of Copenhagen, Denmark
| | - Mathias Perslev
- Department of Computer Science, University of Copenhagen, Denmark
| | - Jens Petersen
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jordi Pont-Tuset
- Eidgenössische Technische Hochschule Zurich (ETHZ), Zurich, Switzerland
| | - Jin Qi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, China
| | - Xiaojuan Qi
- Department of Electrical and Electronic Engineering, The University of Hong Kong, China
| | - Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | | | - Ignacio Sarasua
- Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
| | - Andrea Schenk
- Fraunhofer MEVIS, Bremen, Germany; Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
| | - Zengming Shen
- Beckman Institute, University of Illinois at Urbana-Champaign, USA; Siemens Healthineers, USA
| | - Jordi Torres
- Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politecnica de Catalunya, Catalonia, Spain
| | - Christian Wachinger
- Department of Informatics, Technical University of Munich, Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany
| | - Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden
| | - Leon Weninger
- Institute of Imaging & Computer Vision, RWTH Aachen University, Germany
| | - Jianrong Wu
- Tencent Healthcare (Shenzhen) Co., Ltd, China
| | | | - Xiaoping Yang
- Department of Mathematics, Nanjing University, China
| | - Simon Chun-Ho Yu
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
| | - Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, NY, USA
| | - Miao Yue
- CGG Services (Singapore) Pte. Ltd., Singapore
| | - Liping Zhang
- Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China
| | - Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, PA, USA
| | - Rickmer Braren
- German Cancer Consortium (DKTK), Germany; Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; Comprehensive Cancer Center Munich, Munich, Germany
| | - Volker Heinemann
- Department of Hematology/Oncology & Comprehensive Cancer Center Munich, LMU Klinikum Munich, Germany
| | | | - An Tang
- Department of Radiology, Radiation Oncology and Nuclear Medicine, University of Montréal, Canada
| | | | - Luc Soler
- Department of Surgical Data Science, Institut de Recherche contre les Cancers de l'Appareil Digestif (IRCAD), France
| | - Bram van Ginneken
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Israel
| | - Leo Joskowicz
- School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel
| | - Bjoern Menze
- Department of Informatics, Technical University of Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
| |
Collapse
|
40
|
Liu H, Gong G, Zou W, Hu N, Wang J. Topologically preserved registration of 3D CT images with deep networks. Phys Med Biol 2023; 68. [PMID: 36623316 DOI: 10.1088/1361-6560/acb197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 01/09/2023] [Indexed: 01/11/2023]
Abstract
Objective. Computed tomography (CT) image registration makes fast and accurate imaging-based disease diagnosis possible. We aim to develop a framework which can perform accurate local registration of organs in 3D CT images while preserving the topology of the transformation. Approach. In this framework, the Faster R-CNN method is first used to detect local areas containing organs in the fixed and moving images, whose results are then registered with a weakly supervised deep neural network. In this network, a novel 3D channel coordinate attention (CA) module is introduced to reduce the loss of position information. An image edge loss and an organ labelling loss are used to weakly supervise the training of the deep network, which enables the network to focus on registering organs and image structures. An intuitive inverse module is also used to reduce folding of the deformation field. More specifically, folding is suppressed directly by simultaneously maximizing forward and backward registration accuracy in the image domain, rather than indirectly by measuring the consistency of the forward and inverse deformation fields as usual. Main results. Our method achieves an average Dice similarity coefficient (DSC) of 0.954 and an average Similarity (Sim) of 0.914 on publicly available liver datasets (LiTS for training and Sliver07 for testing) and an average DSC of 0.914 and an average Sim of 0.947 on our home-built left ventricular myocardium (LVM) dataset. Significance. Experimental results show that our proposed method can significantly improve the registration accuracy of organs such as the liver and LVM. Moreover, our inverse module intuitively improves the topological preservation of the transformations.
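Folding of a deformation field is commonly quantified by the fraction of voxels whose Jacobian determinant is non-positive. The sketch below computes that fraction from a displacement field under this standard definition; it is not the authors' implementation, and the finite-difference approximation is an assumption.
```python
import numpy as np

def folding_fraction(disp):
    """Fraction of voxels where the deformation folds (det(Jacobian) <= 0).

    disp: displacement field of shape (3, D, H, W) in voxel units.
    The deformation is phi(x) = x + disp(x), so its Jacobian is I + grad(disp),
    approximated here with finite differences.
    """
    # grads[i, j] = d(disp_i) / d(x_j), shape (3, 3, D, H, W)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)], axis=0)
    jac = grads.copy()
    for i in range(3):
        jac[i, i] += 1.0  # add the identity
    # Move the 3x3 matrix axes last and take the per-voxel determinant.
    jac = np.moveaxis(jac, (0, 1), (-2, -1))
    det = np.linalg.det(jac)
    return float((det <= 0).mean())
```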
Collapse
Affiliation(s)
- Huaying Liu
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
| | - Guanzhong Gong
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250000, People's Republic of China
| | - Wei Zou
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
| | - Nan Hu
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
| | - Jiajun Wang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
| |
Collapse
|
41
|
Wang S, Pang X, de Keyzer F, Feng Y, Swinnen JV, Yu J, Ni Y. AI-based MRI auto-segmentation of brain tumor in rodents, a multicenter study. Acta Neuropathol Commun 2023; 11:11. [PMID: 36641470 PMCID: PMC9840251 DOI: 10.1186/s40478-023-01509-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Accepted: 01/06/2023] [Indexed: 01/15/2023] Open
Abstract
Automatic segmentation of rodent brain tumors on magnetic resonance imaging (MRI) may facilitate biomedical research. The current study aims to prove the feasibility of automatic segmentation by artificial intelligence (AI) and the practicability of AI-assisted segmentation. MRI images, including T2WI, T1WI and CE-T1WI, of brain tumors from 57 WAG/Rij rats in KU Leuven and 46 mice from The Cancer Imaging Archive (TCIA) were collected. A 3D U-Net architecture was adopted for segmentation of the tumor-bearing brain and the brain tumor. After training, these models were tested on both datasets after Gaussian noise addition. The reduction of inter-observer disparity by AI-assisted segmentation was also evaluated. The AI model segmented the tumor-bearing brain well for both the Leuven and TCIA datasets, with Dice similarity coefficients (DSCs) of 0.87 and 0.85, respectively. After noise addition, the performance remained unchanged when the signal-to-noise ratio (SNR) was higher than two or eight, respectively. For the segmentation of tumor lesions, the AI-based model yielded DSCs of 0.70 and 0.61 for the Leuven and TCIA datasets, respectively. Similarly, the performance was uncompromised when the SNR was over two and eight, respectively. AI-assisted segmentation could significantly reduce inter-observer disparities and segmentation time in both rats and mice. Both AI models, for segmenting the brain or tumor lesions, could improve inter-observer agreement and therefore contribute to the standardization of subsequent biomedical studies.
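The robustness test above adds Gaussian noise at controlled SNR levels. A minimal sketch of one common way to do this follows, assuming SNR is defined as mean foreground signal divided by the noise standard deviation; the paper's exact definition is not restated here.
```python
import numpy as np

def add_gaussian_noise(image, target_snr, rng=None):
    """Add zero-mean Gaussian noise so that mean foreground intensity divided
    by the noise standard deviation equals `target_snr` (one common SNR
    definition; illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    signal = image[image > 0].mean()      # mean foreground intensity
    sigma = signal / target_snr           # noise std for the requested SNR
    return image + rng.normal(0.0, sigma, size=image.shape)
```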
Collapse
Affiliation(s)
- Shuncong Wang
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
| | - Xin Pang
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium; Faculty of Economics and Business, KU Leuven, 3000 Leuven, Belgium
| | - Frederik de Keyzer
- Department of Radiology, University Hospitals Leuven, KU Leuven, Herestraat 49, 3000 Leuven, Belgium
| | - Yuanbo Feng
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
| | - Johan V. Swinnen
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
| | - Jie Yu
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
| | - Yicheng Ni
- Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
| |
Collapse
|
42
|
Zhou R, Ou Y, Fang X, Azarpazhooh MR, Gan H, Ye Z, Spence JD, Xu X, Fenster A. Ultrasound carotid plaque segmentation via image reconstruction-based self-supervised learning with limited training labels. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:1617-1636. [PMID: 36899501 DOI: 10.3934/mbe.2023074] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/18/2023]
Abstract
Carotid total plaque area (TPA) is an important measurement contributing to the evaluation of stroke risk. Deep learning provides an efficient method for ultrasound carotid plaque segmentation and TPA quantification. However, the high performance of deep learning requires datasets with many labeled images for training, which is very labor-intensive. Thus, we propose an image reconstruction-based self-supervised learning algorithm (IR-SSL) for carotid plaque segmentation when few labeled images are available. IR-SSL consists of a pre-training task and a downstream segmentation task. The pre-training task learns region-wise representations with local consistency by reconstructing plaque images from randomly partitioned and disordered images. The pre-trained model is then transferred to the segmentation network as the initial parameters in the downstream task. IR-SSL was implemented with two networks, UNet++ and U-Net, and evaluated on two independent datasets of 510 carotid ultrasound images from 144 subjects at SPARC (London, Canada) and 638 images from 479 subjects at Zhongnan Hospital (Wuhan, China). Compared to the baseline networks, IR-SSL improved the segmentation performance when trained on few labeled images (n = 10, 30, 50 and 100 subjects). For 44 SPARC subjects, IR-SSL yielded Dice similarity coefficients (DSC) of 80.14-88.84%, and algorithm TPAs were strongly correlated (r = 0.962-0.993, p < 0.001) with manual results. The models trained on the SPARC images but applied to the Zhongnan dataset without retraining achieved DSCs of 80.61-88.18% and strong correlation with manual segmentation (r = 0.852-0.978, p < 0.001). These results suggest that IR-SSL could improve deep learning when trained on small labeled datasets, making it useful for monitoring carotid plaque progression/regression in clinical use and trials.
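The pretext task described above reconstructs plaque images from randomly partitioned and disordered versions of themselves. The sketch below shows one way to generate such a shuffled input; the 4x4 grid and the assumption that the image dimensions are divisible by the grid size are illustrative choices, not the authors' configuration.
```python
import numpy as np

def shuffle_patches(image, grid=4, rng=None):
    """Partition a 2D image into a grid of patches and return a randomly
    reordered version (the corrupted input of the reconstruction pretext task).

    Assumes the image height and width are divisible by `grid`.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    ph, pw = h // grid, w // grid
    patches = [image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i * grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)
```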
Collapse
Affiliation(s)
- Ran Zhou
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | - Yanghan Ou
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | - Xiaoyue Fang
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | | | - Haitao Gan
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | - Zhiwei Ye
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | - J David Spence
- Robarts Research Institute, Western University, London, Canada
| | - Xiangyang Xu
- Liyuan Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Aaron Fenster
- Robarts Research Institute, Western University, London, Canada
| |
Collapse
|
43
|
Wang J, Zhang X, Guo L, Shi C, Tamura S. Multi-scale attention and deep supervision-based 3D UNet for automatic liver segmentation from CT. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:1297-1316. [PMID: 36650812 DOI: 10.3934/mbe.2023059] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
BACKGROUND Automatic liver segmentation is a prerequisite for hepatoma treatment; however, low accuracy and stability hinder its clinical application. To alleviate this limitation, in this paper we deeply mine contextual information at different scales and combine it with deep supervision to improve the accuracy of liver segmentation. METHODS We propose a new network called MAD-UNet for automatic liver segmentation from CT. It is grounded in the 3D UNet and leverages multi-scale attention and deep supervision mechanisms. In the encoder, the downsampling pooling of the 3D UNet is replaced by convolution to alleviate the loss of feature information. Meanwhile, a residual module is introduced to avoid vanishing gradients. In addition, we use long-short skip connections (LSSC) instead of ordinary skip connections to preserve more edge detail. In the decoder, features of different scales are aggregated, and an attention module is employed to capture spatial context information. Moreover, we utilize a deep supervision mechanism to improve learning from both deep and shallow information. RESULTS We evaluated the proposed method on three public datasets, LiTS17, SLiver07, and 3DIRCADb, and obtained Dice scores of 0.9727, 0.9752, and 0.9691 for liver segmentation, respectively, outperforming other state-of-the-art (SOTA) methods. CONCLUSIONS Both qualitative and quantitative experimental results demonstrate that the proposed method makes full use of feature information from different stages while enhancing the learning of spatial information, thereby achieving high liver segmentation accuracy. Thus, it is a promising tool for automatic liver segmentation in clinical assistance.
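Deep supervision, as used above, attaches auxiliary losses to intermediate decoder outputs. The following PyTorch sketch shows the general pattern; the weights, loss choice, and number of side outputs are illustrative assumptions, not the MAD-UNet settings.
```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of losses computed on decoder outputs at several scales.

    side_outputs: list of logits tensors (N, 1, D, H, W), coarse to fine;
    target: binary mask (N, 1, D, H, W) at full resolution.
    """
    total = 0.0
    for out, w in zip(side_outputs, weights):
        # Upsample each auxiliary output to the target resolution before the loss.
        out = F.interpolate(out, size=target.shape[2:], mode="trilinear",
                            align_corners=False)
        total = total + w * F.binary_cross_entropy_with_logits(out, target.float())
    return total
```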
Collapse
Affiliation(s)
- Jinke Wang
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Xiangyang Zhang
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Liang Guo
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Changfa Shi
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
| | | |
Collapse
|
44
|
Wang J, Zhang L, Zhang Y. Mixture 2D Convolutions for 3D Medical Image Segmentation. Int J Neural Syst 2023; 33:2250059. [PMID: 36328969 DOI: 10.1142/s0129065722500599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Three-dimensional (3D) medical image segmentation plays a crucial role in medical care applications. Although various two-dimensional (2D) and 3D neural network models have been applied to 3D medical image segmentation and achieved impressive results, a trade-off remains between efficiency and accuracy. To address this issue, a novel mixture convolutional network (MixConvNet) is proposed, in which traditional 2D/3D convolutional blocks are replaced with novel MixConv blocks. In the MixConv block, 3D convolution is decomposed into a mixture of 2D convolutions from different views. Therefore, the MixConv block fully utilizes the advantages of 2D convolution and maintains the learning ability of 3D convolution. It acts as a 3D convolution and thus can process volumetric input directly and learn inter-slice features, which are absent in the traditional 2D convolutional block. By contrast, the proposed MixConv block contains only 2D convolutions; hence, it has significantly fewer trainable parameters and a smaller computation budget than a block containing 3D convolutions. Furthermore, the proposed MixConvNet is pre-trained with small input patches and fine-tuned with large input patches to further improve segmentation performance. In experiments on the Decathlon Heart dataset and the Sliver07 dataset, the proposed MixConvNet outperformed state-of-the-art methods such as UNet3D, VNet, and nnUnet.
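The idea of replacing a 3D convolution with 2D convolutions from different views can be emulated with planar 3D kernels, as in the PyTorch sketch below. The class name, channel sizes, and averaging fusion are illustrative assumptions, not the MixConvNet design.
```python
import torch
import torch.nn as nn

class MixConvBlock(nn.Module):
    """Mixture of 2D convolutions over a 3D volume.

    Each branch is effectively a 2D convolution applied from one view
    (axial, coronal, sagittal), implemented as a 3D kernel flat along one axis.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.axial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.coronal = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 3), padding=(1, 0, 1))
        self.sagittal = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Fuse the three view-specific responses (averaging is an illustrative choice).
        return self.act((self.axial(x) + self.coronal(x) + self.sagittal(x)) / 3.0)

# Example: MixConvBlock(1, 8)(torch.randn(1, 1, 32, 64, 64)) -> (1, 8, 32, 64, 64)
```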
Collapse
Affiliation(s)
- Jianyong Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
| | - Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
| | - Yi Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
| |
Collapse
|
45
|
Kong Z, Zhang M, Zhu W, Yi Y, Wang T, Zhang B. Data enhancement based on M2-Unet for liver segmentation in Computed Tomography. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
46
|
Marschner S, Datarb M, Gaasch A, Xu Z, Grbic S, Chabin G, Geiger B, Rosenman J, Corradini S, Niyazi M, Heimann T, Möhler C, Vega F, Belka C, Thieke C. A deep image-to-image network organ segmentation algorithm for radiation treatment planning: principles and evaluation. Radiat Oncol 2022; 17:129. [PMID: 35869525 PMCID: PMC9308364 DOI: 10.1186/s13014-022-02102-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 06/28/2022] [Indexed: 01/02/2023] Open
Abstract
Background We describe and evaluate a deep network algorithm which automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. Methods The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products “syngo.via RT Image Suite VB50” and “AI-Rad Companion Organs RT VA20” (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap and distance, e.g., Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD95). The contours were also compared visually slice by slice. Results We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm) and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreements with some exceptions for heart and rectum. Conclusions The DI2IN algorithm automatically generated contours for organs at risk close to those by a human expert, making the contouring step in radiation treatment planning simpler and faster. Few cases still required manual corrections, mainly for heart and rectum.
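The evaluation above uses DSC and HD95. As a reference, the sketch below computes one common HD95 definition (95th percentile of pooled symmetric surface distances, ignoring voxel spacing); it is not the evaluation code used in the study.
```python
import numpy as np
from scipy import ndimage

def hd95(mask_a, mask_b):
    """95th-percentile symmetric surface distance between two binary masks,
    in voxel units (spacing handling omitted for brevity)."""
    def surface(m):
        m = m.astype(bool)
        return m & ~ndimage.binary_erosion(m)

    sa, sb = surface(mask_a), surface(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~sb)
    dist_to_a = ndimage.distance_transform_edt(~sa)
    d_ab = dist_to_b[sa]   # distances from A's surface to B's surface
    d_ba = dist_to_a[sb]   # distances from B's surface to A's surface
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```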
Collapse
|
47
|
Liu X, Elbanan MG, Luna A, Haider MA, Smith AD, Sabottke CF, Spieler BM, Turkbey B, Fuentes D, Moawad A, Kamel S, Horvat N, Elsayes KM. Radiomics in Abdominopelvic Solid-Organ Oncologic Imaging: Current Status. AJR Am J Roentgenol 2022; 219:985-995. [PMID: 35766531 PMCID: PMC10616929 DOI: 10.2214/ajr.22.27695] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Radiomics is the process of extraction of high-throughput quantitative imaging features from medical images. These features represent noninvasive quantitative biomarkers that go beyond the traditional imaging features visible to the human eye. This article first reviews the steps of the radiomics pipeline, including image acquisition, ROI selection and image segmentation, image preprocessing, feature extraction, feature selection, and model development and application. Current evidence for the application of radiomics in abdominopelvic solid-organ cancers is then reviewed. Applications including diagnosis, subtype determination, treatment response assessment, and outcome prediction are explored within the context of hepatobiliary and pancreatic cancer, renal cell carcinoma, prostate cancer, gynecologic cancer, and adrenal masses. This literature review focuses on the strongest available evidence, including systematic reviews, meta-analyses, and large multicenter studies. Limitations of the available literature are highlighted, including marked heterogeneity in radiomics methodology, frequent use of small sample sizes with high risk of overfitting, and lack of prospective design, external validation, and standardized radiomics workflow. Thus, although studies have laid a foundation that supports continued investigation into radiomics models, stronger evidence is needed before clinical adoption.
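As an illustration of the feature-extraction step of the pipeline described above, the sketch below computes a few first-order features from a masked ROI. Real radiomics pipelines use dedicated libraries, resampling, intensity discretization, and many more feature classes; the names and bin count here are illustrative choices.
```python
import numpy as np
from scipy import stats

def first_order_features(image, mask, bins=64):
    """A handful of first-order radiomics features from a masked ROI."""
    voxels = image[mask > 0].astype(float)
    counts, _ = np.histogram(voxels, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(stats.skew(voxels)),
        "kurtosis": float(stats.kurtosis(voxels)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```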
Collapse
Affiliation(s)
- Xiaoyang Liu
- Joint Department of Medical Imaging, Division of Abdominal Imaging, University Health Network, University of Toronto, ON, Canada
| | - Mohamed G Elbanan
- Department of Radiology, Yale New Haven Health, Bridgeport Hospital, Bridgeport, CT
| | | | - Masoom A Haider
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada
- Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, Canada
| | - Andrew D Smith
- Department of Radiology, University of Alabama at Birmingham, Birmingham, AL
| | - Carl F Sabottke
- Department of Medical Imaging, University of Arizona College of Medicine, Tucson, AZ
| | - Bradley M Spieler
- Department of Radiology, University Medical Center, Louisiana State University Health Sciences Center, New Orleans, LA
| | - Baris Turkbey
- Molecular Imaging Program, National Cancer Institute, NIH, Bethesda, MD
| | - David Fuentes
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Ahmed Moawad
- Department of Diagnostic and Interventional Radiology, Mercy Catholic Medical Center, Darby, PA
| | - Serageldin Kamel
- Department of Lymphoma, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Natally Horvat
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY
| | - Khaled M Elsayes
- Department of Abdominal Imaging, University of Texas MD Anderson Cancer Center, 1400 Pressler St, Houston, TX 77030
| |
Collapse
|
48
|
Wang J, Zhang X, Lv P, Wang H, Cheng Y. Automatic Liver Segmentation Using EfficientNet and Attention-Based Residual U-Net in CT. J Digit Imaging 2022; 35:1479-1493. [PMID: 35711074 PMCID: PMC9712863 DOI: 10.1007/s10278-022-00668-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 05/30/2022] [Accepted: 06/03/2022] [Indexed: 10/18/2022] Open
Abstract
This paper proposes a new network framework which leverages EfficientNetB4, attention gates, and residual learning to achieve automatic and accurate liver segmentation. First, we use EfficientNetB4 as the encoder to extract more feature information during the encoding stage. Then, an attention gate is introduced in the skip connection to suppress irrelevant regions and highlight features relevant to the specific segmentation task. Finally, to alleviate the problem of vanishing gradients, we replace the traditional convolution of the decoder with a residual block to improve segmentation accuracy. We verified the proposed method on the LiTS17 and SLiver07 datasets and compared it with classical networks such as FCN, U-Net, attention U-Net, and attention Res-U-Net. In the Sliver07 evaluation, the proposed method achieved the best segmentation performance on all five standard metrics. In the LiTS17 assessment, it obtained the best performance except for a slightly inferior RVD. The qualitative and quantitative results demonstrate the method's applicability to liver segmentation and its good prospects for computer-assisted liver segmentation.
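The sketch below shows an additive attention gate on a skip connection, in the spirit of attention U-Net; the 2D setting, channel sizes, and fusion are illustrative assumptions, not the authors' configuration.
```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate applied to skip-connection features.

    x: skip-connection features; g: gating signal from the coarser decoder
    level, assumed already resampled to x's spatial size.
    """

    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        # alpha = sigmoid(psi(relu(theta(x) + phi(g)))) gives per-pixel weights.
        alpha = self.sigmoid(self.psi(self.relu(self.theta_x(x) + self.phi_g(g))))
        return x * alpha  # re-weight skip features before concatenation
```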
Collapse
Affiliation(s)
- Jinke Wang
- Department of Software Engineering, Harbin University of Science and Technology, No. 2006, Xueyuan Road, Shandong Province, Rongcheng City, 264300, China.
- School of Automation, Harbin University of Science and Technology, Harbin, 150080, China.
| | - Xiangyang Zhang
- School of Automation, Harbin University of Science and Technology, Harbin, 150080, China
| | - Peiqing Lv
- School of Automation, Harbin University of Science and Technology, Harbin, 150080, China
| | - Haiying Wang
- School of Automation, Harbin University of Science and Technology, Harbin, 150080, China
| | - Yuanzhi Cheng
- School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, 266061, China
| |
Collapse
|
49
|
Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. [PMID: 36483313 PMCID: PMC9724825 DOI: 10.3389/fninf.2022.933230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Accepted: 10/13/2022] [Indexed: 09/19/2023] Open
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures among different images, which is critical to many tasks, such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in the intensity, texture, and anatomy resulting from different imaging modalities, different sample preparation methods, or different developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GAN) have attracted increasing interest in both mono- and cross-modal biomedical image registrations due to their special ability to eliminate the modal variance and their adversarial training strategy. This paper provides a comprehensive survey of the GAN-based mono- and cross-modal biomedical image registration methods. According to the different implementation strategies, we organize the GAN-based mono- and cross-modal biomedical image registration methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, the main contributions, and the advantages and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all the cited works from different points of view and reveal future trends for GAN-based biomedical image registration studies.
Collapse
Affiliation(s)
- Tingting Han
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
| | - Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
| | - Wenting Luo
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
| | - Huiming Wang
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
| | - Zhe Jin
- School of Artificial Intelligence, Anhui University, Hefei, China
| | - Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
| |
Collapse
|
50
|
Jin R, Wang M, Xu L, Lu J, Song E, Ma G. Automatic 3D CT liver segmentation based on fast global minimization of probabilistic active contour. Med Phys 2022; 50:2100-2120. [PMID: 36413182 DOI: 10.1002/mp.16116] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 10/27/2022] [Accepted: 11/05/2022] [Indexed: 11/23/2022] Open
Abstract
PURPOSE Automatic liver segmentation from computed tomography (CT) images is an essential preprocessing step for computer-aided diagnosis of liver diseases. However, due to the large differences in liver shapes, low contrast with adjacent tissues, and the existence of tumors or other abnormalities, liver segmentation has been very challenging. This study presents an accurate and fast liver segmentation method based on a novel probabilistic active contour (PAC) model and its fast global minimization scheme (3D-FGMPAC), which is explainable compared with deep learning methods. METHODS The proposed method first constructs a slice-indexed histogram to localize the volume of interest (VOI) and estimate the probability that a voxel belongs to the liver according to its intensity. The probabilistic image is used to initialize the 3D PAC model. Secondly, a new contour indicator function, which is a component of the model, is produced by combining gradient-based edge detection and Hessian-matrix-based surface detection. Then, a fast numerical scheme derived for the 3D PAC model is performed to evolve the initial probabilistic image into the global minimizer of the model, which is a smoothed probabilistic image showing a distinctly highlighted liver. Next, a simple region-growing strategy is applied to extract the whole liver mask from the smoothed probabilistic image. Finally, a B-spline surface is constructed to fit the patch of the rib cage to prevent possible leakage into adjacent intercostal tissues. RESULTS The proposed method is evaluated on two public datasets. The average Dice score, volume overlap error, volume difference, symmetric surface distance and volume processing time are 0.96, 7.35%, 0.02%, 1.17 mm and 19.8 s for the Sliver07 dataset, and 0.95, 8.89%, -0.02%, 1.45 mm and 23.08 s for the 3Dircadb dataset, respectively. CONCLUSIONS The proposed fully automatic approach can effectively segment the liver from low-contrast and complex backgrounds. The quantitative and qualitative results demonstrate that the proposed segmentation method outperforms state-of-the-art traditional automatic liver segmentation algorithms and achieves very competitive performance compared with recent deep learning-based methods.
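The final extraction step described above grows a liver mask from the smoothed probability map. The sketch below illustrates the general kind of region-growing and largest-component selection involved; the thresholds and the component-filtering logic are illustrative assumptions, not the paper's algorithm.
```python
import numpy as np
from scipy import ndimage

def extract_liver_mask(prob, seed_thresh=0.8, grow_thresh=0.5):
    """Grow a liver mask from a probability map.

    Voxels above `seed_thresh` act as seeds; the region grows into the
    connected set of voxels above `grow_thresh`, and the largest remaining
    connected component is kept.
    """
    candidate = prob >= grow_thresh
    labeled, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros_like(prob, dtype=bool)
    # Keep only components that contain at least one high-confidence seed voxel.
    seeds = prob >= seed_thresh
    keep = np.unique(labeled[seeds])
    keep = keep[keep > 0]
    mask = np.isin(labeled, keep)
    # Retain the largest remaining component as the liver.
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```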
Collapse
Affiliation(s)
- Renchao Jin
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Manyang Wang
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Lijun Xu
- School of Computer and Information Engineering, Hubei University, Wuhan, China
| | - Jiayi Lu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Enmin Song
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Guangzhi Ma
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
| |
Collapse
|