1. Mun SB, Choi ST, Kim YJ, Kim KG, Lee WS. AI-Based 3D Liver Segmentation and Volumetric Analysis in Living Donor Data. J Imaging Inform Med 2025. PMID: 40087225. DOI: 10.1007/s10278-025-01468-9.
Abstract
This study investigated the application of deep learning for 3-dimensional (3D) liver segmentation and volumetric analysis in living donor liver transplantation. Using abdominal computed tomography data from 55 donors, this study aimed to evaluate the liver segmentation performance of various U-Net-based models, including 3D U-Net, RU-Net, DU-Net, and RDU-Net, before and after hepatectomy. Accurate liver volume measurement is critical in liver transplantation to ensure adequate functional recovery and minimize postoperative complications. The models were trained and validated using a fivefold cross-validation approach. Performance metrics such as Dice similarity coefficient (DSC), recall, specificity, precision, and accuracy were used to assess the segmentation results. The highest segmentation accuracy was achieved in preoperative images with a DSC of 95.73 ± 1.08%, while postoperative day 7 images showed the lowest performance with a DSC of 93.14 ± 2.10%. A volumetric analysis conducted to measure hepatic resection and regeneration rates revealed an average liver resection rate of 40.52 ± 8.89% and a regeneration rate of 13.50 ± 8.95% by postoperative day 63. Regression analyses of the model-derived liver resection and regeneration rates against the volumetric results were statistically significant (p < 0.0001). The results indicate high reliability and clinical applicability of deep learning models in accurately measuring liver volume and assessing regenerative capacity, thus enhancing the management and recovery of liver donors.
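The DSC and the two volumetric rates reported above are simple to compute once binary liver masks and the voxel spacing are available. A minimal sketch in NumPy; the volumes and spacing are illustrative placeholders, and the regeneration-rate formula is one common definition, not necessarily the study's exact one:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Mask volume in millilitres: voxel count x voxel volume (mm^3 -> mL)."""
    return mask.sum() * float(np.prod(spacing_mm)) / 1000.0

# Illustrative volumes (mL) for one donor.
v_pre, v_post7, v_post63 = 1400.0, 840.0, 960.0
resection_rate = (v_pre - v_post7) / v_pre * 100       # comparable to the ~40% average above
regeneration_rate = (v_post63 - v_post7) / v_pre * 100  # regrowth relative to preoperative volume
```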
Affiliation(s)
- Sae Byeol Mun: Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences & Technology, Gachon University, Incheon, 21999, Republic of Korea; Medical Devices R&D Center, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Sang Tae Choi: Department of Surgery, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Young Jae Kim: Medical Devices R&D Center, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Gachon Biomedical & Convergence Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Kwang Gi Kim: Medical Devices R&D Center, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Department of Biomedical Engineering, College of IT Convergence, Gachon University, Seongnam-Si, 13120, Republic of Korea
- Won Suk Lee: Department of Surgery, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
2. Ghobadi V, Ismail LI, Wan Hasan WZ, Ahmad H, Ramli HR, Norsahperi NMH, Tharek A, Hanapiah FA. Challenges and solutions of deep learning-based automated liver segmentation: A systematic review. Comput Biol Med 2025; 185:109459. PMID: 39642700. DOI: 10.1016/j.compbiomed.2024.109459.
Abstract
The liver is one of the vital organs in the body, and precise liver segmentation in medical images is essential for liver disease treatment. Deep learning-based liver segmentation faces several challenges. This research analyzes the challenges of liver segmentation reported in prior studies and identifies the modifications made to network models and other enhancements implemented by researchers to tackle each challenge. In total, 88 articles from the Scopus and ScienceDirect databases published between January 2016 and January 2022 were studied. The liver segmentation challenges are classified into five main categories, each containing several subcategories. For each challenge, the techniques proposed to overcome it are investigated. The review details the authors, publication years, dataset types, imaging technologies, and evaluation metrics of all references for comparison. Additionally, a summary table outlines the challenges and solutions.
Affiliation(s)
- Vahideh Ghobadi: Faculty of Engineering, Universiti Putra Malaysia, Serdang, 43400, Selangor, Malaysia
- Luthffi Idzhar Ismail: Faculty of Engineering, Universiti Putra Malaysia, Serdang, 43400, Selangor, Malaysia
- Wan Zuha Wan Hasan: Faculty of Engineering, Universiti Putra Malaysia, Serdang, 43400, Selangor, Malaysia
- Haron Ahmad: KPJ Specialist Hospital, Damansara Utama, Petaling Jaya, 47400, Selangor, Malaysia
- Hafiz Rashidi Ramli: Faculty of Engineering, Universiti Putra Malaysia, Serdang, 43400, Selangor, Malaysia
- Anas Tharek: Hospital Sultan Abdul Aziz Shah, University Putra Malaysia, Serdang, 43400, Selangor, Malaysia
- Fazah Akhtar Hanapiah: Faculty of Medicine, Universiti Teknologi MARA, Damansara Utama, Sungai Buloh, 47000, Selangor, Malaysia
3. Mustonen H, Isosalo A, Nortunen M, Nevalainen M, Nieminen MT, Huhta H. DLLabelsCT: Annotation tool using deep transfer learning to assist in creating new datasets from abdominal computed tomography scans, case study: Pancreas. PLoS One 2024; 19:e0313126. PMID: 39625972. PMCID: PMC11614254. DOI: 10.1371/journal.pone.0313126.
Abstract
The utilization of artificial intelligence (AI) is expanding significantly within medical research and, to some extent, in clinical practice. Deep learning (DL) applications, which use large convolutional neural networks (CNN), hold considerable potential, especially in optimizing radiological evaluations. However, training DL algorithms to clinical standards requires extensive datasets, and their processing is labor-intensive. In this study, we developed an annotation tool named DLLabelsCT that utilizes CNN models to accelerate the image analysis process. To validate DLLabelsCT, we trained a CNN model with a ResNet34 encoder and a UNet decoder to segment the pancreas on an open-access dataset, used the DL model to assist in annotating a local dataset, and then used that dataset to further refine the model. DLLabelsCT was also tested on two external testing datasets. The tool accelerates annotation 3.4-fold compared with a completely manual annotation method. Of the 3,715 CT scan slices in the testing datasets, 50% did not require editing when the segmentations made by the ResNet34-UNet model were reviewed, and the mean and standard deviation of the Dice similarity coefficient was 0.82 ± 0.24. DLLabelsCT is highly accurate and saves substantial time and resources. Furthermore, it can be easily modified to support other deep learning models for other organs, making it an efficient tool for future research involving larger datasets.
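A ResNet34-encoder/U-Net-decoder segmentation model of the kind validated here can be assembled in a few lines. A minimal sketch, assuming the segmentation_models_pytorch package; the input size and 0.5 threshold are illustrative choices, not taken from the paper:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet34 encoder: single-channel CT slice in, one foreground class out.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",  # transfer learning from ImageNet
    in_channels=1,
    classes=1,
)

ct_slice = torch.randn(1, 1, 512, 512)  # one axial CT slice (batch, C, H, W)
with torch.no_grad():
    proposal = torch.sigmoid(model(ct_slice)) > 0.5  # draft mask for a human to review/edit
```

In an annotation-assist workflow like DLLabelsCT's, such draft masks are shown to the annotator, who edits only the slices that need it.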
Affiliation(s)
- Henrik Mustonen: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Antti Isosalo: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Minna Nortunen: Research Unit of Translational Medicine, Oulu University Hospital, Oulu, Finland; Department of Surgery, Oulu University Hospital, Oulu, Finland
- Mika Nevalainen: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Miika T. Nieminen: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Heikki Huhta: Research Unit of Translational Medicine, Oulu University Hospital, Oulu, Finland; Department of Surgery, Oulu University Hospital, Oulu, Finland
4. Cebula M, Biernacka A, Bożek O, Kokoszka B, Kazibut S, Kujszczyk A, Kulig-Kulesza M, Modlińska S, Kufel J, Azierski M, Szydło F, Winder M, Pilch-Kowalczyk J, Gruszczyńska K. Evaluation of Various Methods of Liver Measurement in Comparison to Volumetric Segmentation Based on Computed Tomography. J Clin Med 2024; 13:3634. PMID: 38999200. PMCID: PMC11242708. DOI: 10.3390/jcm13133634.
Abstract
Background: A reliable assessment of liver volume, necessary before transplantation, remains a challenge. Our work aimed to assess inter-observer differences in liver evaluation and measurement and to compare different formulas for calculating liver volume against volumetric segmentation. Methods: Eight researchers measured standard liver dimensions on 105 abdominal computed tomography (CT) scans. Based on the results obtained, liver volume was calculated using twelve different methods. An independent observer performed volumetric segmentation of the livers based on the same CT examinations. Results: Significant differences were found between the formulas and in relation to volumetric segmentation, with the closest results obtained for the Heinemann et al. method. The measurements of individual observers differed significantly from one another, and the observers rated different numbers of livers as enlarged. Conclusions: Given these significant differences, volumetric liver segmentation, despite being time-consuming, appears to be the most accurate method for the daily assessment of liver volume.
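The two approaches being compared are easy to state concretely: voxel-count volumetry from a segmentation mask versus a closed-form estimate. A minimal sketch; the mask and spacing are placeholders, and the formula shown is the BSA-based standard-liver-volume equation attributed to Heinemann et al. (1999), which may differ from the exact variant used in this study:

```python
import numpy as np

def volumetric_segmentation_ml(mask: np.ndarray, spacing_mm) -> float:
    """Reference-standard volume: foreground voxel count x voxel volume (mm^3 -> mL)."""
    return mask.sum() * float(np.prod(spacing_mm)) / 1000.0

def heinemann_slv_ml(bsa_m2: float) -> float:
    """Standard liver volume, Heinemann et al. (1999): SLV = 1072.8 x BSA - 345.7 (mL)."""
    return 1072.8 * bsa_m2 - 345.7

# Illustrative comparison for one patient.
liver_mask = np.zeros((200, 512, 512), dtype=bool)  # placeholder segmentation
liver_mask[80:140, 150:350, 100:300] = True
print(volumetric_segmentation_ml(liver_mask, spacing_mm=(2.5, 0.7, 0.7)))
print(heinemann_slv_ml(bsa_m2=1.9))
```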
Affiliation(s)
- Maciej Cebula: Individual Medical Practice, 40-754 Katowice, Poland
- Angelika Biernacka: Department of Radiodiagnostics and Invasive Radiology, University Clinical Center Prof. Kornel Gibiński of the Medical University of Silesia in Katowice, 40-752 Katowice, Poland
- Oskar Bożek: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Bartosz Kokoszka: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Sylwia Kazibut: Department of Radiodiagnostics and Invasive Radiology, University Clinical Center Prof. Kornel Gibiński of the Medical University of Silesia in Katowice, 40-752 Katowice, Poland
- Anna Kujszczyk: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Monika Kulig-Kulesza: Department of Radiology and Radiodiagnostics in Zabrze, Medical University of Silesia, 41-800 Katowice, Poland
- Sandra Modlińska: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Jakub Kufel: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Michał Azierski: Students’ Scientific Association of MedTech, Medical University of Silesia, 40-055 Katowice, Poland; Students’ Scientific Association of Computer Analysis and Artificial Intelligence, Department of Radiology and Nuclear Medicine, Medical University of Silesia, 40-752 Katowice, Poland
- Filip Szydło: Department of Radiodiagnostics and Invasive Radiology, University Clinical Center Prof. Kornel Gibiński of the Medical University of Silesia in Katowice, 40-752 Katowice, Poland
- Mateusz Winder: Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland; Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Joanna Pilch-Kowalczyk: Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
- Katarzyna Gruszczyńska: Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences, Medical University of Silesia, 40-752 Katowice, Poland
5. Khan R, Su L, Zaman A, Hassan H, Kang Y, Huang B. Customized m-RCNN and hybrid deep classifier for liver cancer segmentation and classification. Heliyon 2024; 10:e30528. PMID: 38765046. PMCID: PMC11096931. DOI: 10.1016/j.heliyon.2024.e30528.
Abstract
Diagnosing liver disease presents a significant medical challenge in impoverished countries, with millions of individuals succumbing to it each year. Existing models for detecting liver abnormalities suffer from limited accuracy and restrictive constraints. As a result, there is a pressing need for improved, efficient, and effective liver disease detection methods. To address the limitations of current models, this study introduces a deep liver segmentation and classification system based on a Customized Mask-Region Convolutional Neural Network (cm-RCNN). The process begins with preprocessing the input liver image using Adaptive Histogram Equalization (AHE), which dehazes the input image, removes color distortion, and applies linear transformations to obtain the preprocessed image. Next, a precise region of interest is segmented from the preprocessed image using a novel deep strategy called cm-RCNN. To enhance segmentation accuracy, the architecture incorporates the ReLU activation function and a modified sigmoid activation function. Subsequently, a variety of features are extracted from the segmented image, including ResNet features, shape features (area, perimeter, approximation, and convex hull), and an enhanced median binary pattern. These extracted features are then used to train a hybrid classification model that incorporates SqueezeNet and DeepMaxout classifiers; the final classification outcome is determined by averaging the scores obtained from both classifiers.
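Adaptive histogram equalization of the kind used in this pipeline's preprocessing step is available off the shelf. A minimal sketch with OpenCV's contrast-limited variant (CLAHE), which is a close, noise-robust relative of plain AHE; the file path, clip limit, and tile size are illustrative assumptions:

```python
import cv2

# Load a CT slice exported as an 8-bit grayscale image (hypothetical path).
img = cv2.imread("liver_slice.png", cv2.IMREAD_GRAYSCALE)

# CLAHE enhances local contrast tile by tile while limiting noise amplification,
# sharpening the boundary between liver parenchyma and adjacent tissue.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
preprocessed = clahe.apply(img)
cv2.imwrite("liver_slice_clahe.png", preprocessed)
```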
Affiliation(s)
- Rashid Khan: College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China; College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Liyilei Su: College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China; College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Asim Zaman: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Haseeb Hassan: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Yan Kang: Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China; College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Bingding Huang: College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China; College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
6. Umapathy L, Brown T, Mushtaq R, Greenhill M, Lu J, Martin D, Altbach M, Bilgin A. Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation. Med Phys 2024; 51:2707-2720. PMID: 37956263. PMCID: PMC10994772. DOI: 10.1002/mp.16820.
Abstract
BACKGROUND Contrastive learning, a successful form of representational learning, has shown promising results in pretraining deep learning (DL) models for downstream tasks. When working with limited annotation data, as in medical image segmentation tasks, learning domain-specific local representations can further improve the performance of DL models. PURPOSE In this work, we extend the contrastive learning framework to utilize domain-specific contrast information from unlabeled Magnetic Resonance (MR) images to improve the performance of downstream MR image segmentation tasks in the presence of limited labeled data. METHODS The contrast in MR images is controlled by underlying tissue properties (e.g., T1 or T2) and image acquisition parameters. We hypothesize that learning to discriminate local representations based on underlying tissue properties should improve subsequent segmentation tasks on MR images. We propose a novel constrained contrastive learning (CCL) strategy that uses tissue-specific information via a constraint map to define positive and negative local neighborhoods for contrastive learning, embedding this information in the representational space during pretraining. For a given MR contrast image, the proposed strategy uses local signal characteristics (the constraint map) across a set of related multi-contrast MR images as a surrogate for underlying tissue information. We demonstrate the utility of the approach for two downstream applications: (1) multi-organ segmentation in T2-weighted images, where a DL model learns T2 information with constraint maps from a set of 2D multi-echo T2-weighted images (n = 101), and (2) tumor segmentation in multi-parametric images from the public brain tumor segmentation (BraTS) dataset (n = 80), where DL models learn T1 and T2 information from multi-parametric BraTS images. Performance is evaluated on downstream multi-label segmentation tasks with limited data in (1) T2-weighted images of the abdomen from an in-house Radial-T2 dataset (Train/Test = 30/20), (2) the public Cartesian-T2 dataset (Train/Test = 6/12), and (3) multi-parametric MR images from the public brain tumor segmentation dataset (BraTS) (Train/Test = 40/50). The performance of the proposed CCL strategy is compared to state-of-the-art self-supervised contrastive learning techniques. In each task, a model is also trained using all available labeled data for supervised baseline performance. RESULTS The proposed CCL strategy consistently yielded improved Dice scores, Precision, and Recall metrics, and reduced HD95 values, across all segmentation tasks. We also observed performance comparable to the baseline with reduced annotation effort. The t-SNE visualization of features for T2-weighted images demonstrates the model's ability to embed T2 information in the representational space. On the BraTS dataset, we also observed that using an appropriate multi-contrast space to learn T1+T2, T1, or T2 information during pretraining further improved the performance of tumor segmentation tasks. CONCLUSIONS Learning to embed tissue-specific information that controls MR image contrast with the proposed constrained contrastive learning improved the performance of DL models on subsequent segmentation tasks compared to conventional self-supervised contrastive learning techniques. The use of such domain-specific local representations could help in understanding model behavior, improving performance, and mitigating the scarcity of labeled data in MR image segmentation tasks.
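At the core of any such contrastive pretraining is an InfoNCE-style loss over positive and negative pairs; here the pairs come from constraint-map similarity rather than augmentation identity. A minimal, generic PyTorch sketch; the pair-selection rule is a simplification of the paper's constraint-map strategy, and all tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positives, negatives, tau=0.1):
    """Generic InfoNCE loss for one anchor embedding.

    anchor:    (D,) embedding of a local patch
    positives: (P, D) embeddings whose constraint-map values match the anchor
    negatives: (N, D) embeddings with dissimilar constraint-map values
    """
    anchor = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1)
    neg = F.normalize(negatives, dim=1)
    pos_sim = torch.exp(pos @ anchor / tau)        # (P,)
    neg_sim = torch.exp(neg @ anchor / tau).sum()  # scalar
    # Pull positives toward the anchor, push negatives away; average over positives.
    return -torch.log(pos_sim / (pos_sim + neg_sim)).mean()

loss = info_nce(torch.randn(128), torch.randn(4, 128), torch.randn(32, 128))
```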
Affiliation(s)
- Lavanya Umapathy: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States; Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY, United States
- Taylor Brown: Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; College of Medicine, University of Arizona, Tucson, AZ, United States
- Raza Mushtaq: Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; College of Medicine, University of Arizona, Tucson, AZ, United States
- Mark Greenhill: Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; College of Medicine, University of Arizona, Tucson, AZ, United States
- J’rick Lu: Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; College of Medicine, University of Arizona, Tucson, AZ, United States
- Diego Martin: Department of Radiology, Houston Methodist Hospital, Houston, TX, United States
- Maria Altbach: Department of Medical Imaging, University of Arizona, Tucson, AZ, United States
- Ali Bilgin: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, United States; Department of Medical Imaging, University of Arizona, Tucson, AZ, United States; Program in Applied Mathematics, University of Arizona, Tucson, AZ, United States; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, United States
7. Pei C, Wu F, Yang M, Pan L, Ding W, Dong J, Huang L, Zhuang X. Multi-Source Domain Adaptation for Medical Image Segmentation. IEEE Trans Med Imaging 2024; 43:1640-1651. PMID: 38133966. DOI: 10.1109/tmi.2023.3346285.
Abstract
Unsupervised domain adaptation (UDA) aims to mitigate the performance drop that models suffer when tested on the target domain, caused by the domain shift between the source and target domains. Most UDA segmentation methods focus on the scenario of a single source domain. However, in practical situations, gold-standard data may be available from multiple sources (domains), and such multi-source training data can provide more information for knowledge transfer; how to utilize these data for better domain adaptation remains to be explored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. First, we employ a multi-level adversarial learning scheme to adapt features at different levels between each of the source domains and the target, improving segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validated the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results showed that our method delivers promising performance and compares favorably to state-of-the-art approaches.
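A multi-model consistency loss of this kind can be written as the disagreement between the per-source models' predictions on the same unlabeled target batch. A minimal sketch; penalizing each model's KL divergence from the ensemble mean is one plausible formulation, not necessarily the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def consistency_loss(probs):
    """Mean KL divergence of each model's prediction from the ensemble mean.

    probs: list of (B, C, H, W) softmax outputs, one per source-domain model,
           all evaluated on the same unlabeled target-domain batch.
    """
    mean = torch.stack(probs).mean(dim=0)
    return sum(
        F.kl_div(p.clamp_min(1e-8).log(), mean, reduction="batchmean")
        for p in probs
    ) / len(probs)

preds = [torch.softmax(torch.randn(2, 4, 64, 64), dim=1) for _ in range(3)]
loss = consistency_loss(preds)  # drives the source-specific models to agree on the target
```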
8. Goldaracena N, Vargas PA, McCormack L. Pre-operative assessment of living liver donors' liver anatomy and volumes. Updates Surg 2024. PMID: 38526699. DOI: 10.1007/s13304-024-01806-6.
Abstract
Decades of experience support living donor liver transplantation (LDLT) as a favorable strategy to reduce waitlist mortality; the multiple regenerative pathways of hepatocytes and other hepatic cells justify the rationale behind it. Nonetheless, living liver donation is still underused, and its broader implementation is challenging, mostly due to variability in practices leading to concerns about donor safety. A non-systematic literature search was conducted for peer-reviewed original articles related to the pre-operative evaluation of living liver donor candidates, and eligible studies were synthesized upon consensus for discussion in this up-to-date review. The review demonstrates that the importance of preoperative assessment of vascular anatomy, biliary anatomy, and liver volume, both to ensure donor safety and to allow adequate surgical planning for graft procurement, is widely recognized. Moreover, the data indicate that anatomic variants of the vascular and biliary systems in healthy donors are common, present in up to 50% of the population; comprehensive mapping and visualization of each component are therefore needed. The different imaging modalities reported across practices are discussed in detail. Lastly, assessment of liver volume must take into account several technical and donor factors that increase the chance of error in volume estimation, which occurs in up to 10% of cases. Experience suggests that maximizing donor safety and lessening donor risk result from integrated expertise in hepatobiliary and transplant surgery, together with multidisciplinary efforts in performing a comprehensive pre-operative donor assessment. Although technical advances have increased the accuracy of volume estimation, over- or under-estimation remains a challenge that needs further attention.
Affiliation(s)
- Nicolas Goldaracena: Department of Surgery, Division of Transplantation, University of Virginia Health System, 1215 Lee Street, PO Box 800709, Charlottesville, VA, 22908-0709, USA
- Paola A Vargas: Department of Surgery, Division of Transplantation, University of Virginia Health System, 1215 Lee Street, PO Box 800709, Charlottesville, VA, 22908-0709, USA
- Lucas McCormack: Transplant Unit, Hospital Aleman de Buenos Aires, Buenos Aires, Argentina
9. Al-Bahou R, Bruner J, Moore H, Zarrinpar A. Quantitative methods for optimizing patient outcomes in liver transplantation. Liver Transpl 2024; 30:311-320. PMID: 38153309. PMCID: PMC10932841. DOI: 10.1097/lvt.0000000000000325.
Abstract
Liver transplantation (LT) is a lifesaving yet complex intervention with considerable challenges impacting graft and patient outcomes. Despite best practices, 5-year graft survival is only 70%. Sophisticated quantitative techniques offer potential solutions by assimilating multifaceted data into insights exceeding human cognition. Optimizing donor-recipient matching and graft allocation presents additional intricacies, involving the integration of clinical and laboratory data to select the ideal donor and recipient pair. Allocation must balance physiological variables with geographical and logistical constraints and timing. Quantitative methods can integrate these complex factors to optimize graft utilization. Such methods can also aid in personalizing treatment regimens, drawing on both pretransplant and posttransplant data, possibly using continuous immunological monitoring to enable early detection of graft injury or infected states. Advanced analytics is thus poised to transform management in LT, maximizing graft and patient survival. In this review, we describe quantitative methods applied to organ transplantation, with a focus on LT. These include quantitative methods for (1) utilizing and allocating donor organs equitably and optimally, (2) improving surgical planning through preoperative imaging, (3) monitoring graft and immune status, (4) determining immunosuppressant doses, and (5) establishing and maintaining the health of graft and patient after LT.
Affiliation(s)
- Raja Al-Bahou: Department of Surgery, University of Florida College of Medicine, Gainesville, Florida, USA
- Julia Bruner: Department of Surgery, University of Florida College of Medicine, Gainesville, Florida, USA
- Helen Moore: Department of Medicine, University of Florida College of Medicine, Gainesville, Florida, USA
- Ali Zarrinpar: Department of Surgery, University of Florida College of Medicine, Gainesville, Florida, USA
10. Gupta AC, Cazoulat G, Al Taie M, Yedururi S, Rigaud B, Castelo A, Wood J, Yu C, O'Connor C, Salem U, Silva JAM, Jones AK, McCulloch M, Odisio BC, Koay EJ, Brock KK. Fully automated deep learning based auto-contouring of liver segments and spleen on contrast-enhanced CT images. Sci Rep 2024; 14:4678. PMID: 38409252. PMCID: PMC10967337. DOI: 10.1038/s41598-024-53997-y.
Abstract
Manual delineation of liver segments on computed tomography (CT) images for primary/secondary liver cancer (LC) patients is time-intensive and prone to inter/intra-observer variability. Therefore, we developed a deep-learning-based model to auto-contour liver segments and spleen on contrast-enhanced CT (CECT) images. We trained two models, a 3D patch-based attention U-Net and the 3D full-resolution configuration of nnU-Net, to determine the best architecture (BA). The BA was then also trained with vessel and with vessel-plus-spleen labels to assess the impact on segment contouring. Models were trained, validated, and tested on 160, 40, and 33 CECT scans of LC patients, respectively, with two external test sets of 25 (CCH) and 20 (CPVE) scans. The BA outperformed the alternative architecture across all segments, with median differences in Dice similarity coefficient (DSC) ranging from 0.03 to 0.05 (p < 0.05). The vessel and vessel-plus-spleen variants were not statistically different from each other (p > 0.05); however, both were slightly better than the BA alone, by up to 0.02 DSC. The final model showed a mean DSC of 0.89, 0.82, 0.88, 0.87, 0.96, and 0.95 for segments 1, 2, 3, 4, 5-8, and spleen, respectively, on the entire test sets. Qualitatively, more than 85% of cases showed a Likert score of 3 or higher on the test sets. Our final model provides clinically acceptable contours of liver segments and spleen that are usable in treatment planning.
Affiliation(s)
- Aashish C Gupta: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Guillaume Cazoulat: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Mais Al Taie: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sireesha Yedururi: Abdominal Imaging Department, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Bastien Rigaud: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Austin Castelo: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- John Wood: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Cenji Yu: The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA; Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Caleb O'Connor: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Usama Salem: Abdominal Imaging Department, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Aaron Kyle Jones: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Molly McCulloch: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Bruno C Odisio: Department of Interventional Radiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Eugene J Koay: The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA; Department of Gastrointestinal Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kristy K Brock: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA; Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
11. Sanchez-Garcia J, Lopez-Verdugo F, Shorti R, Krong J, Kastenberg ZJ, Walters S, Gagnon A, Paci P, Zendejas I, Alonso D, Fujita S, Contreras AG, Botha J, Esquivel CO, Rodriguez-Davalos MI. Three-dimensional Liver Model Application for Liver Transplantation. Transplantation 2024; 108:464-472. PMID: 38259179. DOI: 10.1097/tp.0000000000004730.
Abstract
BACKGROUND Children are removed from the liver transplant waitlist because of death or progressive illness, and size mismatch accounts for 30% of organ refusals. This study aimed to demonstrate that 3-dimensional (3D) technology is a feasible and accurate adjunct to the organ allocation and living donor selection process. METHODS This prospective multicenter study included pediatric liver transplant candidates and living donors from January 2020 to February 2023. Patient-specific, 3D-printed liver models were used for anatomic planning, real-time evaluation during organ procurement, and surgical navigation. The primary outcome was model accuracy; the secondary outcome was the impact on outcomes of living donor hepatectomy. Study groups were analyzed using propensity score matching with a retrospective cohort. RESULTS Twenty-eight recipients were included. The median percentage error was -0.6% for 3D models, which had the highest correlation to the actual liver explant (Pearson's r = 0.96, P < 0.001) compared with other volume calculation methods. Patient and graft survival were comparable. Among 41 living donors, the median percentage error of the allograft was 12.4%. The donor-matched study group had lower central line utilization (21.4% versus 75%, P = 0.045), shorter length of stay (4 versus 7 d, P = 0.003), and a lower mean comprehensive complication index (3 versus 21, P = 0.014). CONCLUSIONS Three-dimensional volume is highly correlated with actual liver explant volume and may vary across different allografts for living donation. The addition of 3D-printed liver models during the transplant evaluation and organ procurement process is a feasible and safe adjunct to the perioperative decision-making process.
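The accuracy metrics reported here, percentage volume error and Pearson correlation against the explant, are straightforward to reproduce. A minimal sketch with NumPy/SciPy; the volume arrays are illustrative placeholders, not study data:

```python
import numpy as np
from scipy.stats import pearsonr

model_ml = np.array([820.0, 655.0, 910.0, 540.0])    # 3D-model volumes (mL)
explant_ml = np.array([835.0, 650.0, 905.0, 560.0])  # actual explant volumes (mL)

# Signed percentage error of each model volume relative to the explant.
pct_error = (model_ml - explant_ml) / explant_ml * 100.0
r, p = pearsonr(model_ml, explant_ml)
print(f"median % error = {np.median(pct_error):.1f}, Pearson r = {r:.2f} (p = {p:.3g})")
```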
Affiliation(s)
- Jorge Sanchez-Garcia: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Fidel Lopez-Verdugo: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Rami Shorti: Emerging Technologies, Intermountain Health, Murray, UT
- Jake Krong: Transplant Research Department, Intermountain Medical Center, Murray, UT
- Zachary J Kastenberg: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Division of Pediatric Surgery, University of Utah School of Medicine, Salt Lake City, UT
- Shannon Walters: Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Andrew Gagnon: Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Philippe Paci: Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Ivan Zendejas: Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Diane Alonso: Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Shiro Fujita: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Alan G Contreras: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Jean Botha: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Abdominal Transplant Service, Intermountain Medical Center, Murray, UT
- Carlos O Esquivel: Division of Abdominal Transplantation, Lucile Packard Children's Hospital, Stanford University School of Medicine, Stanford, CA
- Manuel I Rodriguez-Davalos: Liver Center, Intermountain Primary Children's Hospital, Salt Lake City, UT; Division of Transplant Surgery, University of Utah School of Medicine, Salt Lake City, UT
12. Küçükçiloğlu Y, Şekeroğlu B, Adalı T, Şentürk N. Prediction of osteoporosis using MRI and CT scans with unimodal and multimodal deep-learning models. Diagn Interv Radiol 2024; 30:9-20. PMID: 37309886. PMCID: PMC10773174. DOI: 10.4274/dir.2023.232116.
Abstract
PURPOSE Osteoporosis is the systematic degeneration of the human skeleton, with consequences ranging from a reduced quality of life to mortality; predicting osteoporosis therefore reduces risks and supports patients in taking precautions. Deep-learning models achieve highly accurate results across different imaging modalities. The primary purpose of this research was to develop unimodal and multimodal deep-learning-based diagnostic models to predict bone mineral loss of the lumbar vertebrae using magnetic resonance (MR) and computed tomography (CT) imaging. METHODS Patients who received both lumbar dual-energy X-ray absorptiometry (DEXA) and MRI (n = 120) or CT (n = 100) examinations were included in this study. Unimodal and multimodal convolutional neural networks (CNNs) with dual blocks were proposed to predict osteoporosis using lumbar vertebrae MR and CT examinations in separate and combined datasets. Bone mineral density values obtained by DEXA were used as reference data. The proposed models were compared with a baseline CNN model and six benchmark pre-trained deep-learning models. RESULTS The proposed unimodal model obtained 96.54%, 98.84%, and 96.76% balanced accuracy for the MRI, CT, and combined datasets, respectively, while the multimodal model achieved 98.90% balanced accuracy in 5-fold cross-validation experiments. Furthermore, the models obtained 95.68%-97.91% accuracy on a hold-out validation dataset. In addition, comparative experiments demonstrated that the proposed models yielded superior results, with the dual blocks providing more effective feature extraction for predicting osteoporosis. CONCLUSION This study demonstrated that osteoporosis was accurately predicted by the proposed models using both MR and CT images, and that a multimodal approach improved the prediction of osteoporosis. With further research involving prospective studies with a larger number of patients, there may be an opportunity to implement these technologies in clinical practice.
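Balanced accuracy, the headline metric here, is the mean of per-class recall and is robust to class imbalance between osteoporotic and normal scans. A minimal sketch with scikit-learn; the labels are illustrative:

```python
from sklearn.metrics import balanced_accuracy_score, recall_score

y_true = [1, 1, 1, 1, 0, 0]  # 1 = osteoporosis, 0 = normal (illustrative labels)
y_pred = [1, 1, 1, 0, 0, 1]

# In the binary case, balanced accuracy = (sensitivity + specificity) / 2.
print(balanced_accuracy_score(y_true, y_pred))   # 0.625
print(recall_score(y_true, y_pred))              # sensitivity = 0.75
print(recall_score(y_true, y_pred, pos_label=0)) # specificity = 0.50
```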
Affiliation(s)
- Yasemin Küçükçiloğlu: Near East University Faculty of Medicine, Department of Radiology, Nicosia, Cyprus; Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus
- Boran Şekeroğlu: Near East University, Applied Artificial Intelligence Research Center, Nicosia, Cyprus
- Terin Adalı: Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus; Near East University Faculty of Engineering, Department of Biomedical Engineering, Nicosia, Cyprus; Sabancı University, Nanotechnology Research and Application Center, İstanbul, Turkey
- Niyazi Şentürk: Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus; Near East University Faculty of Engineering, Department of Biomedical Engineering, Nicosia, Cyprus
13. Li S, Feng Y, Xu H, Miao Y, Lin Z, Liu H, Xu Y, Li F. CAENet: Contrast adaptively enhanced network for medical image segmentation based on a differentiable pooling function. Comput Biol Med 2023; 167:107578. PMID: 37918260. DOI: 10.1016/j.compbiomed.2023.107578.
Abstract
Pixel-level differences between low-contrast classes in medical image semantic segmentation often lead to misclassification, posing a typical challenge for the recognition of small targets. To address this challenge, we propose CAENet, a contrast adaptively enhanced semantic segmentation network with a differentiable pooling function. First, an Adaptive Contrast Augmentation module is constructed to automatically extract local high-frequency information, thereby enhancing image details and accentuating the differences between classes. Subsequently, a Frequency-Efficient Channel Attention mechanism is designed to select useful features in the encoding phase, where multifrequency information is employed to extract channel features; one-dimensional convolutional cross-channel interactions are adopted to reduce model complexity. Finally, a differentiable approximation of max pooling is introduced to replace standard max pooling, strengthening the connectivity between neurons and reducing the information loss caused by downsampling. We evaluated the effectiveness of the proposed method through several ablation and comparison experiments under homogeneous conditions. The experimental results demonstrate that our method competes favorably with state-of-the-art networks on five medical image datasets, including four public medical image datasets and one clinical image dataset, and that it can be effectively applied to medical image segmentation.
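A common differentiable stand-in for max pooling is log-sum-exp (softmax-weighted) pooling, which approaches true max pooling as its temperature grows while keeping nonzero gradients for every input in the window. A minimal PyTorch sketch of this general idea; the paper's exact pooling function may differ:

```python
import math
import torch
import torch.nn.functional as F

def lse_pool2d(x: torch.Tensor, kernel: int = 2, beta: float = 10.0) -> torch.Tensor:
    """Log-sum-exp pooling: (1/beta) * log(mean(exp(beta * x))) over each window.

    As beta -> infinity this converges to max pooling, but unlike max pooling
    every element in the window receives a gradient.
    """
    b, c, h, w = x.shape
    patches = F.unfold(x, kernel_size=kernel, stride=kernel)  # (B, C*k*k, L)
    patches = patches.view(b, c, kernel * kernel, -1)         # (B, C, k*k, L)
    pooled = (torch.logsumexp(beta * patches, dim=2) - math.log(kernel * kernel)) / beta
    return pooled.view(b, c, h // kernel, w // kernel)

x = torch.randn(1, 3, 8, 8, requires_grad=True)
y = lse_pool2d(x)  # shape (1, 3, 4, 4), fully differentiable
```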
Affiliation(s)
- Shengke Li: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, Guangdong, China; School of Engineering, Guangzhou College of Technology and Business, Foshan, 528100, Guangdong, China
- Yue Feng: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, Guangdong, China
- Hong Xu: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, Guangdong, China; Victoria University, Melbourne, 8001, Australia
- Yuan Miao: Victoria University, Melbourne, 8001, Australia
- Zhuosheng Lin: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, Guangdong, China
- Huilin Liu: Basic Medical College, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Ying Xu: Laboratory of TCM Four Processing, Shanghai University of TCM, Shanghai, 201203, China
- Fufeng Li: Laboratory of TCM Four Processing, Shanghai University of TCM, Shanghai, 201203, China
14. Yao H, Tian L, Liu X, Li S, Chen Y, Cao J, Zhang Z, Chen Z, Feng Z, Xu Q, Zhu J, Wang Y, Guo Y, Chen W, Li C, Li P, Wang H, Luo J. Development and external validation of the multichannel deep learning model based on unenhanced CT for differentiating fat-poor angiomyolipoma from renal cell carcinoma: a two-center retrospective study. J Cancer Res Clin Oncol 2023; 149:15827-15838. PMID: 37672075. PMCID: PMC10620299. DOI: 10.1007/s00432-023-05339-0.
Abstract
PURPOSE Fat-poor angiomyolipoma (fp-AML) contains fat at levels undetectable on imaging and is therefore often misdiagnosed as renal cell carcinoma (RCC). We aimed to develop and evaluate a multichannel deep learning model for differentiating fp-AML from RCC. METHODS This two-center retrospective study included 320 patients from the First Affiliated Hospital of Sun Yat-Sen University (FAHSYSU) and 132 patients from the Sun Yat-Sen University Cancer Center (SYSUCC). Data from patients at FAHSYSU were divided into a development dataset (n = 267) and a hold-out dataset (n = 53). The development dataset was used to obtain the optimal combination of CT modality and input channel; the hold-out dataset and the SYSUCC dataset were used for independent internal and external validation, respectively. RESULTS In the development phase, models trained on unenhanced CT images performed significantly better than those trained on enhanced CT images in fivefold cross-validation. The best patient-level performance, with an average area under the receiver operating characteristic curve (AUC) of 0.951 ± 0.026 (mean ± SD), was achieved by the "unenhanced CT and 7-channel" model, which was selected as the optimal model. In independent internal and external validation, AUCs of 0.966 (95% CI 0.919-1.000) and 0.898 (95% CI 0.824-0.972), respectively, were obtained with the optimal model, whose performance was better on large tumors (≥ 40 mm) in both validations. CONCLUSION These promising results suggest that our multichannel deep learning classifier based on unenhanced whole-tumor CT images is a highly useful tool for differentiating fp-AML from RCC.
Affiliation(s)
- Haohua Yao: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China; Department of Urology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Li Tian: Department of Medical Imaging, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Xi Liu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Shurong Li: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yuhang Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jiazheng Cao: Department of Urology, Jiangmen Central Hospital, Jiangmen, China
- Zhiling Zhang: Department of Urology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenhua Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Zihao Feng: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Quanhui Xu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jiangquan Zhu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yinghan Wang: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yan Guo: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Wei Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Caixia Li: School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, China
- Peixing Li: School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, China
- Huanjun Wang: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Junhang Luo: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
15. Radiya K, Joakimsen HL, Mikalsen KØ, Aahlin EK, Lindsetmo RO, Mortensen KE. Performance and clinical applicability of machine learning in liver computed tomography imaging: a systematic review. Eur Radiol 2023; 33:6689-6717. PMID: 37171491. PMCID: PMC10511359. DOI: 10.1007/s00330-023-09609-w.
Abstract
OBJECTIVES Machine learning (ML) for medical imaging is emerging for several organs and image modalities. Our objectives were to provide clinicians with an overview of this field by answering the following questions: (1) How is ML applied in liver computed tomography (CT) imaging? (2) How well do ML systems perform in liver CT imaging? (3) What are the clinical applications of ML in liver CT imaging? METHODS A systematic review was carried out according to the guidelines of the PRISMA-P statement. The search string focused on studies containing content relating to artificial intelligence, liver, and computed tomography. RESULTS One hundred ninety-one studies were included. In the majority of studies, ML was applied to CT liver imaging as image analysis without clinician intervention, while newer studies combined ML methods with clinician input. Several models were documented to perform very accurately on reliable but small datasets. Most models identified were deep learning-based, mainly using convolutional neural networks. Our review identified many potential clinical applications of ML in CT liver imaging, including segmentation and classification of the liver and its lesions, segmentation of vascular structures inside the liver, fibrosis and cirrhosis staging, metastasis prediction, and evaluation of chemotherapy. CONCLUSION Several studies attempted to provide transparent results for their models. To make such models suitable for clinical application, prospective clinical validation studies are urgently needed; computer scientists and engineers should seek to cooperate with health professionals to ensure this. KEY POINTS • ML shows great potential for CT liver image tasks such as pixel-wise segmentation and classification of the liver and liver lesions, fibrosis staging, metastasis prediction, and retrieval of relevant liver lesions from similar cases of other patients. • Although result presentation is not standardized, many studies have attempted to provide transparent results so that the performance of ML methods can be interpreted. • Prospective studies for clinical validation of ML methods are urgently needed, preferably carried out in cooperation between clinicians and computer scientists.
Affiliation(s)
- Keyur Radiya: Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway; Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Henrik Lykke Joakimsen: Institute of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway; Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway
- Karl Øyvind Mikalsen: Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway; Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway; UiT Machine Learning Group, Department of Physics and Technology, UiT the Arctic University of Norway, Tromso, Norway
- Eirik Kjus Aahlin: Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway
- Rolv-Ole Lindsetmo: Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway; Head Clinic of Surgery, Oncology and Women Health, University Hospital of North Norway, Tromso, Norway
- Kim Erlend Mortensen: Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway; Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
16. Zhen T, Fang J, Hu D, Ruan M, Wang L, Fan S, Shen Q. Risk stratification by nomogram of deep learning radiomics based on multiparametric magnetic resonance imaging in knee meniscus injury. Int Orthop 2023; 47:2497-2505. PMID: 37386277. DOI: 10.1007/s00264-023-05875-x.
Abstract
PURPOSE To construct and validate a nomogram model that integrates deep learning radiomic features based on multiparametric MRI with clinical features for risk stratification of meniscus injury. METHODS A total of 167 knee MR images were collected from two institutions. All patients were classified into two groups based on the MR diagnostic criteria proposed by Stoller et al. An automatic meniscus segmentation model was constructed with V-Net. LASSO regression was performed to select the optimal features correlated with risk stratification. A nomogram model was constructed by combining the Radscore and clinical features, and the performance of the models was evaluated by ROC analysis and calibration curves. Subsequently, junior doctors used the model in a simulated setting to test its practical effect. RESULTS The Dice similarity coefficients of the automatic meniscus segmentation models were all over 0.8. Eight optimal features, identified by LASSO regression, were employed to calculate the Radscore. The combined model performed better in both the training cohort (AUC = 0.90, 95% CI: 0.84-0.95) and the validation cohort (AUC = 0.84, 95% CI: 0.72-0.93), and the calibration curve indicated better accuracy for the combined model than for either the Radscore or the clinical model alone. In the simulation, the diagnostic accuracy of junior doctors increased from 74.9% to 86.2% after using the model. CONCLUSION Deep learning with V-Net demonstrated great performance for automatic meniscus segmentation of the knee joint, and the nomogram integrating the Radscore and clinical features reliably stratified the risk of knee meniscus injury.
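In radiomics nomograms of this kind, the Radscore is typically the linear combination produced by LASSO: the selected features weighted by their nonzero coefficients plus an intercept. A minimal sketch with scikit-learn; the feature matrix, labels, and settings are illustrative assumptions, not study data:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))  # 120 patients x 50 radiomic features (synthetic)
y = (X[:, 3] - 0.8 * X[:, 7] + rng.normal(scale=0.5, size=120) > 0).astype(float)

Xz = StandardScaler().fit_transform(X)   # standardize before penalized regression
lasso = LassoCV(cv=5).fit(Xz, y)         # cross-validated choice of the L1 penalty

selected = np.flatnonzero(lasso.coef_)            # features surviving the L1 penalty
radscore = Xz @ lasso.coef_ + lasso.intercept_    # per-patient Radscore
print(f"{selected.size} features selected; first Radscore = {radscore[0]:.3f}")
```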
Collapse
Affiliation(s)
- Tao Zhen
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China
| | - Jing Fang
- Zhejiang Provincial Hospital of Chinese Medicine, Hangzhou, 310006, China
| | - Dacheng Hu
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China
| | - Mei Ruan
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China
| | - Luoyu Wang
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China
| | - Sandra Fan
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China
| | - Qijun Shen
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, No. 261 Huansha Road, Hangzhou, Zhejiang 310006, China.
| |
Collapse
|
17
|
Hansen S, Gautam S, Salahuddin SA, Kampffmeyer M, Jenssen R. ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement. Med Image Anal 2023; 89:102870. [PMID: 37541101 DOI: 10.1016/j.media.2023.102870] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Revised: 04/23/2023] [Accepted: 06/12/2023] [Indexed: 08/06/2023]
Abstract
A major barrier to applying deep segmentation models in the medical domain is their typical data-hungry nature, requiring experts to collect and label large amounts of data for training. As a reaction, prototypical few-shot segmentation (FSS) models have recently gained traction as data-efficient alternatives. Nevertheless, despite the recent progress of these models, they still have some essential shortcomings that must be addressed. In this work, we focus on three of these shortcomings: (i) the lack of uncertainty estimation, (ii) the lack of a guiding mechanism to help locate edges and encourage spatial consistency in the segmentation maps, and (iii) the models' inability to do one-step multi-class segmentation. Without modifying or requiring a specific backbone architecture, we propose a modified prototype extraction module that facilitates the computation of uncertainty maps in prototypical FSS models, and show that the resulting maps are useful indicators of the model uncertainty. To improve the segmentation around boundaries and to encourage spatial consistency, we propose a novel feature refinement module that leverages structural information in the input space to help guide the segmentation in the feature space. Furthermore, we demonstrate how uncertainty maps can be used to automatically guide this feature refinement. Finally, to avoid ambiguous voxel predictions that occur when images are segmented class-by-class, we propose a procedure to perform one-step multi-class FSS. The efficacy of our proposed methodology is evaluated on two representative datasets for abdominal organ segmentation (CHAOS dataset and BTCV dataset) and one dataset for cardiac segmentation (MS-CMRSeg dataset). The results show that our proposed methodology significantly (one-sided Wilcoxon signed rank test, p < 0.05) improves the baseline, increasing the overall Dice score by 5.2, 5.1, and 2.8 percentage points on the CHAOS dataset, the BTCV dataset, and the MS-CMRSeg dataset, respectively.
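For orientation, prototypical FSS models of the kind extended here typically build a class prototype by masked average pooling over support features and label query voxels by feature similarity. A toy 2D sketch of that baseline mechanism (shapes and the threshold are illustrative assumptions, not the ADNet++ implementation):

import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    # features: (C, H, W) support features; mask: (H, W) binary foreground mask
    mask = mask.unsqueeze(0).float()
    return (features * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-6)  # (C,)

feat_support = torch.randn(64, 32, 32)   # toy support feature map
feat_query = torch.randn(64, 32, 32)     # toy query feature map
support_mask = torch.rand(32, 32) > 0.5  # toy support annotation

prototype = masked_average_pooling(feat_support, support_mask)          # (64,)
sim = F.cosine_similarity(feat_query, prototype[:, None, None], dim=0)  # (32, 32)
pred = sim > 0.5  # threshold the similarity map into a foreground prediction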
Collapse
Affiliation(s)
- Stine Hansen
- Department of Physics and Technology, UiT The Arctic University of Norway, NO-9037 Tromsø, Norway.
| | - Srishti Gautam
- Department of Physics and Technology, UiT The Arctic University of Norway, NO-9037 Tromsø, Norway
| | - Suaiba Amina Salahuddin
- Department of Physics and Technology, UiT The Arctic University of Norway, NO-9037 Tromsø, Norway
| | - Michael Kampffmeyer
- Department of Physics and Technology, UiT The Arctic University of Norway, NO-9037 Tromsø, Norway
| | - Robert Jenssen
- Department of Physics and Technology, UiT The Arctic University of Norway, NO-9037 Tromsø, Norway
| |
Collapse
|
18
|
Midya A, Chakraborty J, Srouji R, Narayan RR, Boerner T, Zheng J, Pak LM, Creasy JM, Escobar LA, Harrington KA, Gonen M, D'Angelica MI, Kingham TP, Do RKG, Jarnagin WR, Simpson AL. Computerized Diagnosis of Liver Tumors From CT Scans Using a Deep Neural Network Approach. IEEE J Biomed Health Inform 2023; 27:2456-2464. [PMID: 37027632 PMCID: PMC10245221 DOI: 10.1109/jbhi.2023.3248489] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
The liver is a frequent site of benign and malignant, primary and metastatic tumors. Hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) are the most common primary liver cancers, and colorectal liver metastasis (CRLM) is the most common secondary liver cancer. Although imaging characterization of these tumors is central to optimal clinical management, it relies on imaging features that are often non-specific, overlapping, and subject to inter-observer variability. Thus, in this study, we aimed to categorize liver tumors automatically from CT scans using a deep learning approach that objectively extracts discriminating features not visible to the naked eye. Specifically, we used a modified Inception v3 network-based classification model to classify HCC, ICC, CRLM, and benign tumors from pretreatment portal venous phase computed tomography (CT) scans. Using a multi-institutional dataset of 814 patients, this method achieved an overall accuracy rate of 96%, with sensitivity rates of 96%, 94%, 99%, and 86% for HCC, ICC, CRLM, and benign tumors, respectively, on an independent dataset. These results demonstrate the feasibility of the proposed computer-assisted system as a novel non-invasive diagnostic tool for classifying the most common liver tumors objectively.
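A hedged sketch of the general recipe the abstract describes (a modified Inception v3 with a four-class head) follows; the pretrained-weight choice, head replacement, and loss weighting are illustrative assumptions, not the authors' exact model:

import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights="IMAGENET1K_V1")  # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 4)         # HCC, ICC, CRLM, benign
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(2, 3, 299, 299)  # Inception v3 expects 299x299 inputs
labels = torch.tensor([0, 2])
out, aux = model(x)              # in train mode the model returns main + aux logits
loss = criterion(out, labels) + 0.4 * criterion(aux, labels)
loss.backward()
optimizer.step()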
Collapse
|
19
|
Zhong L, Huang P, Shu H, Li Y, Zhang Y, Feng Q, Wu Y, Yang W. United multi-task learning for abdominal contrast-enhanced CT synthesis through joint deformable registration. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107391. [PMID: 36804266 DOI: 10.1016/j.cmpb.2023.107391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 12/13/2022] [Accepted: 01/30/2023] [Indexed: 06/18/2023]
Abstract
Synthesizing abdominal contrast-enhanced computed tomography (CECT) images from non-enhanced CT (NECT) images is of great importance for delineating radiotherapy target volumes, as it avoids the risks of iodinated contrast agents and the registration error between NECT and CECT incurred when transferring delineations. NECT images contain structural information that can reflect the contrast difference between lesions and surrounding tissues. However, existing methods treat synthesis and registration as two separate tasks, which neglects the collaboration between the tasks and fails to address the misalignment between images that remains after standard image pre-processing when training a CECT synthesis model. Thus, we propose a united multi-task learning (UMTL) method for joint synthesis and deformable registration of abdominal CECT. Specifically, our UMTL is an end-to-end multi-task framework, which integrates a deformation field learning network for reducing misalignment errors and a 3D generator for synthesizing CECT images. Furthermore, the learning of enhanced component images and a multi-loss function are adopted to enhance the performance of the synthetic CECT images. The proposed method is evaluated on two different-resolution datasets and a separate test dataset from another center. The synthetic venous phase CECT images of the separate test dataset yield a mean absolute error (MAE) of 32.78±7.27 HU, a mean MAE of 24.15±5.12 HU over the liver region, a mean peak signal-to-noise ratio (PSNR) of 27.59±2.45 dB, and a mean structural similarity (SSIM) of 0.96±0.01. The Dice similarity coefficients of the liver region between the true and synthetic venous phase CECT images are 0.96±0.05 (high-resolution) and 0.95±0.07 (low-resolution), respectively. The proposed method has great potential in aiding the delineation of radiotherapy target volumes.
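The image-quality metrics reported above (MAE, PSNR, SSIM) can be computed as in the following sketch, which uses scikit-image on placeholder arrays standing in for a true and a synthetic CECT slice:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

true_ct = np.random.uniform(-200, 400, (256, 256))        # toy HU values
synth_ct = true_ct + np.random.normal(0, 30, (256, 256))  # toy "synthesis"

mae = np.mean(np.abs(true_ct - synth_ct))
data_range = true_ct.max() - true_ct.min()
psnr = peak_signal_noise_ratio(true_ct, synth_ct, data_range=data_range)
ssim = structural_similarity(true_ct, synth_ct, data_range=data_range)
print(f"MAE {mae:.2f} HU | PSNR {psnr:.2f} dB | SSIM {ssim:.3f}")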
Collapse
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
| | - Pinyu Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
| | - Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
| | - Yin Li
- Department of Information, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510515, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
| | - Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China.
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China.
| |
Collapse
|
20
|
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask the question: can the Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and highlighting the key defining properties that characterize Transformers, we offer a comprehensive review of the state-of-the-art Transformer-based approaches for medical imaging and exhibit current research progress made in the areas of medical image segmentation, recognition, detection, registration, reconstruction, enhancement, etc. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, which are mostly derived from comparing the Transformer and CNN, and its type of architecture, which specifies the manner in which the Transformer and CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
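At the heart of the reviewed architectures is scaled dot-product self-attention, in which every token attends to every other token, unlike the local receptive field of a convolution. A minimal, purely illustrative PyTorch sketch:

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)  # joint query/key/value projection
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)  # each token aggregates from all tokens

tokens = torch.randn(1, 196, 64)  # e.g., 14x14 image patches embedded to 64-d
out = SelfAttention(64)(tokens)   # same shape: (1, 196, 64)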
Collapse
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
| | - Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
| | - Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
| | - Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China.
| |
Collapse
|
21
|
Liang B, Tang C, Zhang W, Xu M, Wu T. N-Net: an UNet architecture with dual encoder for medical image segmentation. SIGNAL, IMAGE AND VIDEO PROCESSING 2023; 17:1-9. [PMID: 37362231 PMCID: PMC10031177 DOI: 10.1007/s11760-023-02528-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 07/08/2022] [Accepted: 02/07/2023] [Indexed: 06/28/2023]
Abstract
In order to assist physicians in diagnosis and treatment planning, accurate and automatic methods of organ segmentation are needed in clinical practice. UNet and its improved models, such as UNet++ and UNet3+, have been powerful tools for medical image segmentation. In this paper, we focus on helping the encoder extract richer features and propose N-Net for medical image segmentation. On the basis of UNet, we propose a dual-encoder model to deepen the network and enhance its feature-extraction ability. In our implementation, a Squeeze-and-Excitation (SE) module is added to the dual-encoder model to obtain channel-level global features. In addition, the introduction of full-scale skip connections promotes the integration of low-level details and high-level semantic information. The performance of our model is tested on lung and liver datasets and compared with UNet, UNet++ and UNet3+ in terms of quantitative evaluation (Dice, Recall, Precision and F1 score) and qualitative evaluation. Our experiments demonstrate that N-Net outperforms UNet, UNet++ and UNet3+ on these datasets. By visual comparison of the segmentation results, N-Net produces more coherent organ boundaries and finer details.
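To make the channel-gating idea concrete, here is a minimal Squeeze-and-Excitation block of the kind the abstract adds to the dual encoder; the layer sizes are illustrative, not the N-Net configuration:

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                      # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)  # excitation: channel weights
        return x * w                                # reweight feature channels

features = torch.randn(2, 64, 128, 128)
recalibrated = SEBlock(64)(features)  # same shape, channels reweighted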
Collapse
Affiliation(s)
- Bingtao Liang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
| | - Chen Tang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
| | - Wei Zhang
- Tianjin Key Laboratory of Ophthalmology and Visual Science, Tianjin Eye Institute, Clinical College of Ophthalmology of Tianjin Medical University, Tianjin Eye Hospital, Tianjin, 300020 China
| | - Min Xu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
| | - Tianbo Wu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China
| |
Collapse
|
22
|
Zhang B, Wang Y, Ding C, Deng Z, Li L, Qin Z, Ding Z, Bian L, Yang C. Multi-scale feature pyramid fusion network for medical image segmentation. Int J Comput Assist Radiol Surg 2023; 18:353-365. [PMID: 36042149 DOI: 10.1007/s11548-022-02738-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 08/11/2022] [Indexed: 02/03/2023]
Abstract
PURPOSE Medical image segmentation is one of the most widely used techniques in diagnostic and clinical research, and accurate segmentation of target organs from blurred border regions and low-contrast adjacent organs in computed tomography (CT) imaging is crucial for clinical diagnosis and treatment. METHODS In this article, we propose a Multi-Scale Feature Pyramid Fusion Network (MS-Net) based on a codec structure formed by combining a Multi-Scale Attention Module (MSAM) and a Stacked Feature Pyramid Module (SFPM). The MSAM is used in the skip connections and aims to extract different levels of contextual detail by dynamically adjusting the receptive fields at different network depths; the SFPM, which includes multi-scale strategies and a multi-layer Feature Perception Module (FPM), is nested at the deepest point of the network and aims to better focus the network's attention on the target organ by adaptively increasing the weight of the features of interest. RESULTS Experiments demonstrate that the proposed MS-Net significantly improved the Dice score from 91.74% to 94.54% on CHAOS, from 97.59% to 98.59% on Lung, and from 82.55% to 86.06% on ISIC 2018, compared with U-Net. Additionally, comparisons with six other state-of-the-art codec structures also show that the presented network has clear advantages on evaluation indicators such as mIoU, Dice, ACC and AUC. CONCLUSION The experimental results show that both the MSAM and SFPM techniques proposed in this paper help the network improve its segmentation performance, so that the proposed MS-Net achieves better results on the CHAOS, Lung and ISIC 2018 segmentation tasks.
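The overlap metrics used in such comparisons are straightforward to compute from binary masks; a small sketch (placeholder masks) for Dice and IoU:

import numpy as np

def dice_and_iou(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, target).sum() + 1e-8)
    return dice, iou

pred = np.random.rand(256, 256) > 0.5  # toy prediction
gt = np.random.rand(256, 256) > 0.5    # toy ground truth
print("Dice %.4f, IoU %.4f" % dice_and_iou(pred, gt))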
Collapse
Affiliation(s)
- Bing Zhang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Yang Wang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Caifu Ding
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Ziqing Deng
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Linwei Li
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Zesheng Qin
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Zhao Ding
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China
| | - Lifeng Bian
- Frontier Institute of Chip and System, Fudan University, Shanghai, 200433, China.
| | - Chen Yang
- Power Systems Engineering Research Center, Ministry of Education, College of Big Data and Information Engineering, Guizhou University, Guiyang, 550025, China.
| |
Collapse
|
23
|
Moniruzzaman MD, Rassau A, Chai D, Islam SMS. Long future frame prediction using optical flow-informed deep neural networks for enhancement of robotic teleoperation in high latency environments. J FIELD ROBOT 2022. [DOI: 10.1002/rob.22135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Affiliation(s)
- M. D. Moniruzzaman
- School of Engineering Edith Cowan University Joondalup Western Australia Australia
| | - Alexander Rassau
- School of Engineering Edith Cowan University Joondalup Western Australia Australia
| | - Douglas Chai
- School of Engineering Edith Cowan University Joondalup Western Australia Australia
| | | |
Collapse
|
24
|
Zhu G, Luo X, Yang T, Cai L, Yeo JH, Yan G, Yang J. Deep learning-based recognition and segmentation of intracranial aneurysms under small sample size. Front Physiol 2022; 13:1084202. [PMID: 36601346 PMCID: PMC9806214 DOI: 10.3389/fphys.2022.1084202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Accepted: 11/28/2022] [Indexed: 12/23/2022] Open
Abstract
The manual identification and segmentation of intracranial aneurysms (IAs) involved in the 3D reconstruction procedure are labor-intensive and prone to human error. To meet the demands of routine clinical management and large cohort studies of IAs, fast and accurate patient-specific IA reconstruction has become a research frontier. In this study, a deep-learning-based framework for IA identification and segmentation was developed, and the impacts of image pre-processing and convolutional neural network (CNN) architectures on the framework's performance were investigated. Three-dimensional (3D) segmentation-dedicated architectures, including 3D UNet, VNet, and 3D Res-UNet, were evaluated. The dataset used in this study included 101 sets of anonymized cranial computed tomography angiography (CTA) images with 140 IA cases. After labeling and image pre-processing, a training set and a test set containing 112 and 28 IA lesions, respectively, were used to train and evaluate the convolutional neural networks mentioned above. The performances of the three convolutional neural networks were compared in terms of training performance, segmentation performance, and segmentation efficiency using multiple quantitative metrics. All the convolutional neural networks showed a non-zero voxel-wise recall (V-Recall) at the case level. Among them, 3D UNet exhibited the best overall segmentation performance under the relatively small sample size. The automatic segmentation results based on 3D UNet reached an average V-Recall of 0.797 ± 0.140 (3.5% and 17.3% higher than that of VNet and 3D Res-UNet), as well as an average Dice similarity coefficient (DSC) of 0.818 ± 0.100, which was 4.1% and 11.7% higher than VNet and 3D Res-UNet. Moreover, the average Hausdorff distance (HD) of the 3D UNet was 3.323 ± 3.212 voxels, which was 8.3% and 17.3% lower than that of VNet and 3D Res-UNet. The three-dimensional deviation analysis also showed that the segmentations of 3D UNet had the smallest deviation, with a max distance of +1.4760/-2.3854 mm, an average distance of 0.3480 mm, a standard deviation (STD) of 0.5978 mm, and a root mean square (RMS) of 0.7269 mm. In addition, the average segmentation time (AST) of the 3D UNet was 0.053 s, equal to that of 3D Res-UNet and 8.62% shorter than VNet. The results from this study suggest that the proposed deep learning framework integrated with 3D UNet can provide fast and accurate IA identification and segmentation.
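The Hausdorff distance reported above can be computed from the voxel coordinates of two binary masks; a sketch with SciPy (toy masks, surface extraction omitted for brevity):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros((64, 64, 64), bool);   gt[22:42, 22:42, 22:42] = True

pts_pred = np.argwhere(pred)  # coordinates of all foreground voxels
pts_gt = np.argwhere(gt)

# symmetric HD: the larger of the two directed Hausdorff distances
hd = max(directed_hausdorff(pts_pred, pts_gt)[0],
         directed_hausdorff(pts_gt, pts_pred)[0])
print(f"Hausdorff distance: {hd:.2f} voxels")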
Collapse
Affiliation(s)
- Guangyu Zhu
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
| | - Xueqi Luo
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
| | - Tingting Yang
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
| | - Li Cai
- Xi’an Key Laboratory of Scientific Computation and Applied Statistics, Xi’an, China; School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an, China
| | - Joon Hock Yeo
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
| | - Ge Yan
- Department of Radiology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
| | - Jian Yang
- Department of Radiology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
| |
Collapse
|
25
|
Tran J, Sharma D, Gotlieb N, Xu W, Bhat M. Application of machine learning in liver transplantation: a review. Hepatol Int 2022; 16:495-508. [PMID: 35020154 DOI: 10.1007/s12072-021-10291-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 12/15/2021] [Indexed: 12/12/2022]
Abstract
BACKGROUND Machine learning (ML) has been increasingly applied in the health-care and liver transplant setting. The demand for liver transplantation continues to expand on an international scale, and with advanced aging and complex comorbidities, many challenges throughout the transplantation decision-making process must be better addressed. There exist massive datasets with hidden, non-linear relationships between demographic, clinical, laboratory, genetic, and imaging parameters that conventional methods fail to capitalize on. Pre-transplant challenges include improving the efficacy of liver segmentation, hepatic steatosis assessment, and graft allocation. Post-transplant applications include predicting patient survival, graft rejection and failure, and post-operative morbidity risk. AIM In this review, we provide a comprehensive summary of ML applications in liver transplantation, including the clinical context and how to overcome the challenges to clinical implementation. METHODS Twenty-nine articles were identified from Ovid MEDLINE, MEDLINE Epub Ahead of Print and In-Process and Other Non-Indexed Citations, Embase, Cochrane Database of Systematic Reviews, and Cochrane Central Register of Controlled Trials. CONCLUSION ML has been widely investigated in liver transplantation, with promising applications in pre- and post-transplant settings. Although challenges exist, including site-specific training requirements, the need for more multi-center studies, and optimization hurdles for clinical interpretability, the powerful potential of ML merits further exploration to enhance patient care.
Collapse
Affiliation(s)
- Jason Tran
- Department of Medicine, University of Ottawa, Ottawa, Canada
| | - Divya Sharma
- Department of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Biostatistics, Princess Margaret Cancer Center, University Health Network, Toronto, ON, Canada
| | - Neta Gotlieb
- Ajmera Transplant Program, University Health Network, Toronto, ON, Canada
| | - Wei Xu
- Department of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Biostatistics, Princess Margaret Cancer Center, University Health Network, Toronto, ON, Canada
| | - Mamatha Bhat
- Ajmera Transplant Program, University Health Network, Toronto, ON, Canada.
- Division of Gastroenterology, Department of Medicine, University of Toronto, 585 University Avenue, Toronto, ON, M5G 2N2, Canada.
| |
Collapse
|
26
|
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
27
|
Ansari MY, Abdalla A, Ansari MY, Ansari MI, Malluhi B, Mohanty S, Mishra S, Singh SS, Abinahed J, Al-Ansari A, Balakrishnan S, Dakua SP. Practical utility of liver segmentation methods in clinical surgeries and interventions. BMC Med Imaging 2022; 22:97. [PMID: 35610600 PMCID: PMC9128093 DOI: 10.1186/s12880-022-00825-2] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 05/09/2022] [Indexed: 12/15/2022] Open
Abstract
Clinical imaging (e.g., magnetic resonance imaging and computed tomography) is a crucial adjunct for clinicians, aiding in the diagnosis of diseases and the planning of appropriate interventions. This is especially true in malignant conditions such as hepatocellular carcinoma (HCC), where image segmentation (such as accurate delineation of the liver and tumor) is the preliminary step taken by clinicians to optimize diagnosis, staging, and treatment planning and intervention (e.g., transplantation, surgical resection, radiotherapy, PVE, embolization, etc.). Thus, segmentation methods could potentially impact diagnosis and treatment outcomes. This paper comprehensively reviews the literature (2012-2021) for relevant segmentation methods and proposes a broad categorization based on their clinical utility (i.e., surgical and radiological interventions) in HCC. The categorization is based on parameters such as precision, accuracy, and automation.
Collapse
|
28
|
Mariam K, Afzal OM, Hussain W, Javed MU, Kiyani A, Rajpoot N, Khurram SA, Khan HA. On Smart Gaze based Annotation of Histopathology Images for Training of Deep Convolutional Neural Networks. IEEE J Biomed Health Inform 2022; 26:3025-3036. [PMID: 35130177 DOI: 10.1109/jbhi.2022.3148944] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Unavailability of large training datasets is a bottleneck that needs to be overcome to realize the true potential of deep learning in histopathology applications. Although slide digitization via whole slide imaging scanners has increased the speed of data acquisition, labeling of virtual slides requires a substantial time investment from pathologists. Eye-gaze annotations have the potential to speed up the slide labeling process. This work explores the viability of eye-gaze labeling, and its timing relative to conventional manual labeling, for training object detectors. Challenges associated with gaze-based labeling and methods to refine the coarse data annotations for subsequent object detection are also discussed. Results demonstrate that gaze-tracking-based labeling can save valuable pathologist time and delivers good performance when employed for training a deep object detector. Using the task of localizing keratin pearls in cases of oral squamous cell carcinoma as a test case, we compare the performance gap between deep object detectors trained using hand-labeled and gaze-labeled data. On average, gaze labeling required 57.6% less time per label than 'bounding-box' hand-labeling, and 85% less time per label than 'freehand' labeling.
Collapse
|
29
|
Hansen S, Gautam S, Jenssen R, Kampffmeyer M. Anomaly Detection-Inspired Few-Shot Medical Image Segmentation Through Self-Supervision With Supervoxels. Med Image Anal 2022; 78:102385. [DOI: 10.1016/j.media.2022.102385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 01/20/2022] [Accepted: 02/01/2022] [Indexed: 10/19/2022]
|
30
|
Li R, Chen X. An efficient interactive multi-label segmentation tool for 2D and 3D medical images using fully connected conditional random field. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 213:106534. [PMID: 34839271 DOI: 10.1016/j.cmpb.2021.106534] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 09/18/2021] [Accepted: 11/12/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE Image segmentation is a crucial and fundamental step in many medical image analysis tasks, such as tumor measurement, surgery planning, and disease diagnosis. To ensure the quality of image segmentation, most current solutions require a labor-intensive manual process of tracing the boundaries of the objects. The workload increases tremendously for three-dimensional (3D) images with multiple objects to be segmented. METHOD In this paper, we introduce our interactive image segmentation tool that provides efficient segmentation of multiple labels for both 2D and 3D medical images. The core segmentation method is based on a fast implementation of the fully connected conditional random field. The software also enables automatic recommendation of the next slice to be annotated in 3D, leading to higher efficiency. RESULTS We have evaluated the tool on many 2D and 3D medical image modalities (e.g. CT, MRI, ultrasound, X-ray) and different objects of interest (abdominal organs, tumors, bones, etc.) in terms of segmentation accuracy, repeatability, and computational time. CONCLUSION In contrast to other interactive image segmentation tools, our software produces high-quality image segmentation results without requiring parameter tuning for each application. Both the software and source code are freely available for research purposes (download: https://drive.google.com/file/d/1JIzWkT3M-X7jeB8tTwVcEw240TGbJAvj/view?usp=sharing).
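For readers unfamiliar with the refinement step, a fully connected CRF pass over a softmax output typically looks like the following sketch, written against the third-party pydensecrf package; the image, probabilities, and kernel parameters here are placeholders:

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

H, W, n_labels = 256, 256, 3
img = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)  # toy RGB slice
probs = np.random.dirichlet(np.ones(n_labels), (H, W)).transpose(2, 0, 1)

d = dcrf.DenseCRF2D(W, H, n_labels)
d.setUnaryEnergy(unary_from_softmax(probs))  # -log of the class probabilities
d.addPairwiseGaussian(sxy=3, compat=3)       # spatial smoothness kernel
d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=img, compat=10)  # appearance kernel
Q = d.inference(5)                           # 5 mean-field iterations
labels = np.argmax(Q, axis=0).reshape(H, W)  # refined label map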
Collapse
Affiliation(s)
- Ruizhe Li
- Intelligent Modelling and Analysis Group, School of Computer Science, University of Nottingham, UK.
| | - Xin Chen
- Intelligent Modelling and Analysis Group, School of Computer Science, University of Nottingham, UK.
| |
Collapse
|
31
|
Wang J, Lv Y, Wang J, Ma F, Du Y, Fan X, Wang M, Ke J. Fully automated segmentation in temporal bone CT with neural network: a preliminary assessment study. BMC Med Imaging 2021; 21:166. [PMID: 34753454 PMCID: PMC8576911 DOI: 10.1186/s12880-021-00698-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 10/26/2021] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND Segmentation of important structures in temporal bone CT is the basis of image-guided otologic surgery. Manual segmentation of temporal bone CT is time-consuming and laborious. We assessed the feasibility and generalization ability of a proposed deep learning model for automated segmentation of critical structures in temporal bone CT scans. METHODS Thirty-nine temporal bone CT volumes including 58 ears were divided into normal (n = 20) and abnormal groups (n = 38). Ossicular chain disruption (n = 10), facial nerve covering the vestibular window (n = 10), and Mondini dysplasia (n = 18) were included in the abnormal group. All facial nerves, auditory ossicles, and labyrinths of the normal group were manually segmented. For the abnormal group, the aberrant structures were manually segmented. Temporal bone CT data were imported into the network in unmarked form. The Dice coefficient (DC) and average symmetric surface distance (ASSD) were used to evaluate the accuracy of automatic segmentation. RESULTS In the normal group, the mean values of DC and ASSD were, respectively, 0.703 and 0.250 mm for the facial nerve; 0.910 and 0.081 mm for the labyrinth; and 0.855 and 0.107 mm for the ossicles. In the abnormal group, the mean values of DC and ASSD were, respectively, 0.506 and 1.049 mm for the malformed facial nerve; 0.775 and 0.298 mm for the deformed labyrinth; and 0.698 and 1.385 mm for the aberrant ossicles. CONCLUSIONS The proposed model has good generalization ability, which highlights the promise of this approach for otologist education, disease diagnosis, and preoperative planning for image-guided otology surgery.
Collapse
Affiliation(s)
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
| | - Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
| | - Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Xin Fan
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Menglin Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China.
| |
Collapse
|
32
|
Wang S, Li C, Wang R, Liu Z, Wang M, Tan H, Wu Y, Liu X, Sun H, Yang R, Liu X, Chen J, Zhou H, Ben Ayed I, Zheng H. Annotation-efficient deep learning for automatic medical image segmentation. Nat Commun 2021; 12:5915. [PMID: 34625565 PMCID: PMC8501087 DOI: 10.1038/s41467-021-26216-9] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 09/22/2021] [Indexed: 01/17/2023] Open
Abstract
Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
- Peng Cheng Laboratory, Shenzhen, Guangdong, China.
- Pazhou Laboratory, Guangzhou, Guangdong, China.
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| | - Rongpin Wang
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Zaiyi Liu
- Department of Medical Imaging, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
| | - Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Hongna Tan
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Xinfeng Liu
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Hui Sun
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
| | - Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Jie Chen
- Peng Cheng Laboratory, Shenzhen, Guangdong, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, Guangdong, China
| | - Huihui Zhou
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | | | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| |
Collapse
|
33
|
Balch JA, Delitto D, Tighe PJ, Zarrinpar A, Efron PA, Rashidi P, Upchurch GR, Bihorac A, Loftus TJ. Machine Learning Applications in Solid Organ Transplantation and Related Complications. Front Immunol 2021; 12:739728. [PMID: 34603324 PMCID: PMC8481939 DOI: 10.3389/fimmu.2021.739728] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 08/25/2021] [Indexed: 11/13/2022] Open
Abstract
The complexity of transplant medicine pushes the boundaries of innate, human reasoning. From networks of immune modulators to dynamic pharmacokinetics to variable postoperative graft survival to equitable allocation of scarce organs, machine learning promises to inform clinical decision making by deciphering prodigious amounts of available data. This paper reviews current research describing how algorithms have the potential to augment clinical practice in solid organ transplantation. We provide a general introduction to different machine learning techniques, describing their strengths, limitations, and barriers to clinical implementation. We summarize emerging evidence that recent advances allow machine learning algorithms to predict acute post-surgical and long-term outcomes, classify biopsy and radiographic data, augment pharmacologic decision making, and accurately represent the complexity of the host immune response. Yet, many of these applications exist in pre-clinical form only, supported primarily by evidence from single-center, retrospective studies. Prospective investigation of these technologies has the potential to unlock machine learning's capacity to augment solid organ transplantation clinical care and health care delivery systems.
Collapse
Affiliation(s)
- Jeremy A Balch
- Department of Surgery, University of Florida Health, Gainesville, FL, United States
| | - Daniel Delitto
- Department of Surgery, Johns Hopkins University, Baltimore, MD, United States
| | - Patrick J Tighe
- Department of Anesthesiology, University of Florida Health, Gainesville, FL, United States.,Department of Orthopedics, University of Florida Health, Gainesville, FL, United States.,Department of Information Systems/Operations Management, University of Florida Health, Gainesville, FL, United States
| | - Ali Zarrinpar
- Department of Surgery, University of Florida Health, Gainesville, FL, United States
| | - Philip A Efron
- Department of Surgery, University of Florida Health, Gainesville, FL, United States
| | - Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States.,Department of Computer and Information Science and Engineering University of Florida, Gainesville, FL, United States.,Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States.,Precision and Intelligent Systems in Medicine (PrismaP), University of Florida, Gainesville, FL, United States
| | - Gilbert R Upchurch
- Department of Surgery, University of Florida Health, Gainesville, FL, United States
| | - Azra Bihorac
- Precision and Intelligent Systems in Medicine (PrismaP), University of Florida, Gainesville, FL, United States.,Department of Medicine, University of Florida Health, Gainesville, FL, United States
| | - Tyler J Loftus
- Department of Surgery, University of Florida Health, Gainesville, FL, United States.,Precision and Intelligent Systems in Medicine (PrismaP), University of Florida, Gainesville, FL, United States
| |
Collapse
|
34
|
Abstract
Artificial intelligence is poised to revolutionize medical imaging. It takes advantage of the high-dimensional quantitative features present in medical images that may not be fully appreciated by humans. Artificial intelligence has the potential to facilitate automatic organ segmentation, disease detection and characterization, and prediction of disease recurrence. This article reviews the current status of artificial intelligence in liver imaging and discusses the opportunities and challenges in clinical implementation.
Collapse
|
35
|
Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021; 117:102109. [PMID: 34127239 DOI: 10.1016/j.artmed.2021.102109] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 01/24/2021] [Accepted: 05/06/2021] [Indexed: 02/05/2023]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully-automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks. In addition to the discriminator, which pushes the model to create realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Encoder fine-tuning from a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Employed for healthy liver, kidney and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Applied to the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs, with good generalization capability. The comprehensive evaluation provided suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making.
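The adversarial ingredient can be sketched as follows: a discriminator learns to tell (image, ground-truth mask) pairs from (image, prediction) pairs, while the generator minimizes a segmentation loss plus a term for fooling the discriminator. The toy networks and loss weighting below are illustrative stand-ins, not the paper's architecture:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())  # toy generator
D = nn.Sequential(nn.Conv2d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

image = torch.rand(4, 1, 64, 64)
mask = (torch.rand(4, 1, 64, 64) > 0.5).float()

# Discriminator step: real (image, GT) pairs vs. fake (image, prediction) pairs.
pred = G(image)
d_loss = bce(D(torch.cat([image, mask], 1)), torch.ones(4, 1)) + \
         bce(D(torch.cat([image, pred.detach()], 1)), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: segmentation loss plus an adversarial "fool the critic" term.
g_loss = nn.functional.binary_cross_entropy(pred, mask) + \
         0.1 * bce(D(torch.cat([image, pred], 1)), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()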
Collapse
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France.
| | - Ali Emre Kavur
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
| | - Naciye Sinem Gezer
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - Yannick Le Meur
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
| | - M Alper Selver
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - François Rousseau
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
| |
Collapse
|
36
|
Alnazer I, Bourdon P, Urruty T, Falou O, Khalil M, Shahin A, Fernandez-Maloigne C. Recent advances in medical image processing for the evaluation of chronic kidney disease. Med Image Anal 2021; 69:101960. [PMID: 33517241 DOI: 10.1016/j.media.2021.101960] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 11/18/2020] [Accepted: 12/31/2020] [Indexed: 12/31/2022]
Abstract
Accurate assessment of renal function and structure remains essential in the diagnosis and prognosis of Chronic Kidney Disease (CKD). Advanced imaging, including Magnetic Resonance Imaging (MRI), Ultrasound Elastography (UE), Computed Tomography (CT) and scintigraphy (PET, SPECT), offers the opportunity to non-invasively retrieve structural, functional and molecular information that could detect changes in renal tissue properties and functionality. Currently, the ability of artificial intelligence to turn conventional medical imaging into a fully-automated diagnostic tool is being widely investigated. In addition to the qualitative analysis performed on renal medical imaging, texture analysis has been integrated with machine learning techniques to quantify renal tissue heterogeneity, providing a promising complementary tool for predicting renal function decline. Interestingly, deep learning holds the potential to become a novel approach to renal function diagnosis. This paper proposes a survey that covers both qualitative and quantitative analysis applied to novel medical imaging techniques for monitoring the decline of renal function. First, we summarize the use of different medical imaging modalities to monitor CKD; then, we show how Artificial Intelligence (AI) can guide renal function evaluation, from segmentation to disease prediction, discussing how texture analysis and machine learning techniques have emerged in recent clinical research to improve renal dysfunction monitoring and prediction. The paper also summarizes the role of AI in renal segmentation.
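As an example of the texture-analysis step discussed here, grey-level co-occurrence matrix (GLCM) features can be extracted from a renal region of interest with scikit-image (function names per scikit-image >= 0.19; the ROI below is a placeholder):

import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy kidney ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # heterogeneity descriptors that would feed an ML classifier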
Collapse
Affiliation(s)
- Israa Alnazer
- XLIM-ICONES, UMR CNRS 7252, Université de Poitiers, France; Laboratoire commune CNRS/SIEMENS I3M, Poitiers, France; AZM Center for Research in Biotechnology and its Applications, EDST, Lebanese University, Beirut, Lebanon.
| | - Pascal Bourdon
- XLIM-ICONES, UMR CNRS 7252, Université de Poitiers, France; Laboratoire commune CNRS/SIEMENS I3M, Poitiers, France
| | - Thierry Urruty
- XLIM-ICONES, UMR CNRS 7252, Université de Poitiers, France; Laboratoire commune CNRS/SIEMENS I3M, Poitiers, France
| | - Omar Falou
- AZM Center for Research in Biotechnology and its Applications, EDST, Lebanese University, Beirut, Lebanon; American University of Culture and Education, Koura, Lebanon; Lebanese University, Faculty of Science, Tripoli, Lebanon
| | - Mohamad Khalil
- AZM Center for Research in Biotechnology and its Applications, EDST, Lebanese University, Beirut, Lebanon
| | - Ahmad Shahin
- AZM Center for Research in Biotechnology and its Applications, EDST, Lebanese University, Beirut, Lebanon
| | - Christine Fernandez-Maloigne
- XLIM-ICONES, UMR CNRS 7252, Université de Poitiers, France; Laboratoire commune CNRS/SIEMENS I3M, Poitiers, France
| |
Collapse
|
37
|
Zhang S, Sun H, Su X, Yang X, Wang W, Wan X, Tan Q, Chen N, Yue Q, Gong Q. Automated machine learning to predict the co-occurrence of isocitrate dehydrogenase mutations and O6-methylguanine-DNA methyltransferase promoter methylation in patients with gliomas. J Magn Reson Imaging 2021; 54:197-205. [PMID: 33393131 DOI: 10.1002/jmri.27498] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 12/17/2020] [Accepted: 12/18/2020] [Indexed: 02/05/2023] Open
Abstract
Combining isocitrate dehydrogenase mutation (IDHmut) with O6-methylguanine-DNA methyltransferase promoter methylation (MGMTmet) has been identified as a critical prognostic molecular marker for gliomas. The aim of this study was to determine the ability of glioma radiomics features from magnetic resonance imaging (MRI) to predict the co-occurrence of IDHmut and MGMTmet by applying the tree-based pipeline optimization tool (TPOT), an automated machine learning (autoML) approach. This was a retrospective study, in which 162 patients with gliomas were evaluated, including 58 patients with co-occurrence of IDHmut and MGMTmet and 104 patients with other statuses, comprising IDH wildtype and MGMT unmethylated (n = 67), IDH wildtype and MGMTmet (n = 36), and IDHmut and MGMT unmethylated (n = 1). Three-dimensional (3D) T1-weighted images, gadolinium-enhanced 3D T1-weighted images (Gd-3DT1WI), T2-weighted images, and fluid-attenuated inversion recovery (FLAIR) images acquired at 3.0 T were used. Radiomics features were extracted from the FLAIR and Gd-3DT1WI images. TPOT was employed to generate the best machine learning pipeline, containing both a feature selector and a classifier, based on the input feature sets. A 4-fold cross-validation was used to evaluate the performance of the automatically generated models. For each iteration, the training set included 121 subjects, while the test set included 41 subjects. Student's t-test or a chi-square test was applied to compare clinical characteristics between the two groups. Sensitivity, specificity, accuracy, kappa score, and AUC were used to evaluate the performance of the TPOT-generated models. Finally, we compared the above metrics across the TPOT-generated models to identify the best-performing one. Patients' ages and grades differed significantly between the two groups (p = 0.002 and p = 0.000, respectively). The 4-fold cross-validation showed that a gradient boosting classifier trained on shape and textural features from the Laplacian-of-Gaussian-filtered Gd-3DT1WI achieved the best performance (average sensitivity = 81.1%, average specificity = 94%, average accuracy = 89.4%, average kappa score = 0.76, average AUC = 0.951). Using autoML based on radiomics features from MRI, a high discriminatory accuracy was achieved for predicting the co-occurrence of IDHmut and MGMTmet in gliomas. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY STAGE: 3.
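A hedged sketch of the TPOT step described above: TPOT searches over preprocessing, feature-selection, and classifier combinations and can export the winning pipeline as code. The data and settings below are synthetic and illustrative:

import numpy as np
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(162, 50)       # placeholder radiomics feature matrix
y = np.random.randint(0, 2, 162)  # 1 = co-occurring IDHmut and MGMTmet

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y)
tpot = TPOTClassifier(generations=5, population_size=20, cv=4,
                      scoring="roc_auc", random_state=42, verbosity=2)
tpot.fit(X_tr, y_tr)
print("held-out score:", tpot.score(X_te, y_te))
tpot.export("best_pipeline.py")   # writes the selected pipeline as a script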
Collapse
Affiliation(s)
- Simin Zhang
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.,Huaxi Glioma Center, West China Hospital of Sichuan University, Chengdu, China
| | - Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Xiaorui Su
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.,Huaxi Glioma Center, West China Hospital of Sichuan University, Chengdu, China
| | - Xibiao Yang
- Huaxi Glioma Center, West China Hospital of Sichuan University, Chengdu, China.,Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Weina Wang
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Xinyue Wan
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Qiaoyue Tan
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.,Division of Radiation Physics, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital of Sichuan University, Chengdu, China
| | - Ni Chen
- Department of Pathology, West China Hospital of Sichuan University, Chengdu, China
| | - Qiang Yue
- Huaxi Glioma Center, West China Hospital of Sichuan University, Chengdu, China; Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Qiyong Gong
- Huaxi MR Research Center (HMRRC), Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| |
Collapse
|
38
|
Kavur AE, Gezer NS, Barış M, Aslan S, Conze PH, Groza V, Pham DD, Chatterjee S, Ernst P, Özkan S, Baydar B, Lachinov D, Han S, Pauli J, Isensee F, Perkonigg M, Sathish R, Rajan R, Sheet D, Dovletov G, Speck O, Nürnberger A, Maier-Hein KH, Bozdağı Akar G, Ünal G, Dicle O, Selver MA. CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation. Med Image Anal 2020; 69:101950. [PMID: 33421920 DOI: 10.1016/j.media.2020.101950] [Citation(s) in RCA: 184] [Impact Index Per Article: 36.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 10/26/2020] [Accepted: 12/16/2020] [Indexed: 12/11/2022]
Abstract
Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) have introduced new state-of-the-art segmentation systems. Although these systems outperform earlier ones in overall accuracy, the effects of DL model properties and parameters on performance remain hard to interpret, which makes comparative analysis a necessary tool towards interpretable studies and systems. Moreover, the performance of DL for emerging learning approaches, such as cross-modality and multi-modal semantic segmentation tasks, has rarely been discussed. To expand the knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2019 in Venice, Italy. Abdominal organ segmentation from routine acquisitions plays an important role in several clinical applications, such as pre-surgical planning or morphological and volumetric follow-up for various diseases. These applications require a certain level of performance on a diverse set of metrics, such as the maximum symmetric surface distance (MSSD) to determine the surgical error margin, or overlap errors for tracking size and shape differences. Previous abdomen-related challenges mainly focused on tumor/lesion detection and/or classification with a single modality. Conversely, CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of the participating approaches from multiple perspectives. The results were investigated thoroughly and compared with manual annotations and interactive methods. The analysis shows that DL models can deliver reliable volumetric performance for single-modality segmentation (CT/MR DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of the participating models decreases dramatically on cross-modality tasks, even for the liver (DICE: 0.88 ± 0.15, MSSD: 36.33 ± 21.97 mm). Despite contrary examples in other applications, multi-tasking DL models designed to segment all organs were observed to perform worse than organ-specific ones (a performance drop of around 5%). Nevertheless, some of the successful models show better performance with their multi-organ versions. We conclude that exploring these pros and cons in both single- versus multi-organ and cross-modality segmentation is poised to have an impact on further research towards effective algorithms that would support real-world clinical applications. Finally, with more than 1500 participants and more than 550 submissions, another important contribution of this study is its analysis of the shortcomings of challenge organization, such as the effects of multiple submissions and the peeking phenomenon.
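The two metric families discussed above can be made concrete with a short sketch: Dice measures volumetric overlap, while MSSD is the largest distance between the two segmentation surfaces, which is why it serves as a surgical error-margin proxy. The NumPy/SciPy implementation below is a minimal sketch assuming binary 3D masks and known voxel spacing; it is not the challenge's official evaluation code.

```python
# Minimal sketch of Dice similarity (volumetric overlap) and maximum
# symmetric surface distance (MSSD) for binary segmentation masks.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mssd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: each mask minus its erosion.
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other
    # mask, in millimetres via the voxel spacing.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return max(dist_to_b[surf_a].max(), dist_to_a[surf_b].max())
```

With a predicted mask and a reference annotation, dice(a, b) reproduces overlap scores of the kind quoted above and mssd(a, b, spacing) the surface-distance scores; note that Dice can be near-perfect while MSSD stays large, since a single protruding error dominates the maximum.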
Collapse
Affiliation(s)
- A Emre Kavur
- Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Izmir, Turkey
| | - N Sinem Gezer
- Department of Radiology, Faculty Of Medicine, Dokuz Eylul University, Izmir, Turkey
| | - Mustafa Barış
- Department of Radiology, Faculty Of Medicine, Dokuz Eylul University, Izmir, Turkey
| | - Sinem Aslan
- Ca' Foscari University of Venice, ECLT and DAIS, Venice, Italy; Ege University, International Computer Institute, Izmir, Turkey
| | | | | | - Duc Duy Pham
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
| | - Soumick Chatterjee
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany; Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
| | - Philipp Ernst
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
| | - Savaş Özkan
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
| | - Bora Baydar
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
| | - Dmitry Lachinov
- Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
| | - Shuo Han
- Johns Hopkins University, Baltimore, USA
| | - Josef Pauli
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
| | - Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Matthias Perkonigg
- CIR Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
| | - Rachana Sathish
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
| | - Ronnie Rajan
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, India
| | - Debdoot Sheet
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
| | - Gurbandurdy Dovletov
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
| | - Oliver Speck
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
| | - Andreas Nürnberger
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
| | - Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Gözde Bozdağı Akar
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
| | - Gözde Ünal
- Faculty of Computer and Informatics Engineering, İstanbul Technical University, İstanbul, Turkey
| | - Oğuz Dicle
- Department of Radiology, Faculty Of Medicine, Dokuz Eylul University, Izmir, Turkey
| | - M Alper Selver
- Department of Electrical and Electronics Engineering, Dokuz Eylul University, Izmir, Turkey.
| |
Collapse
|
39
|
Rezaei M, Näppi JJ, Lippert C, Meinel C, Yoshida H. Generative multi-adversarial network for striking the right balance in abdominal image segmentation. Int J Comput Assist Radiol Surg 2020; 15:1847-1858. [PMID: 32897490 PMCID: PMC7603459 DOI: 10.1007/s11548-020-02254-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 08/21/2020] [Indexed: 12/30/2022]
Abstract
Purpose: The identification of abnormalities that are relatively rare within otherwise normal anatomy is a major challenge for deep learning in the semantic segmentation of medical images. The small number of minority-class samples in the training data makes learning an optimal classifier challenging, while the far more frequent majority-class samples hamper generalization of the classification boundary around the infrequent target objects. In this paper, we developed a novel generative multi-adversarial network, called Ensemble-GAN, for mitigating this class imbalance problem in the semantic segmentation of abdominal images.
Method: The Ensemble-GAN framework is composed of a single generator and a multi-discriminator variant for handling the class imbalance problem, providing better generalization than existing approaches. The ensemble model aggregates the estimates of multiple discriminators trained from different initializations and with losses computed on various subsets of the training data. The single generator network takes the input image as a condition and predicts the corresponding semantic segmentation map using feedback from the ensemble of discriminator networks. To evaluate the framework, we trained it on two public datasets with different imbalance ratios and imaging modalities: CHAOS 2019 and LiTS 2017.
Result: In terms of the F1 score, the accuracies of the semantic segmentation of the healthy spleen, liver, and left and right kidneys were 0.93, 0.96, 0.90, and 0.94, respectively. The overall F1 scores for simultaneous segmentation of the lesions and liver were 0.83 and 0.94, respectively.
Conclusion: The proposed Ensemble-GAN framework demonstrated outstanding performance in the semantic segmentation of medical images compared with other approaches on popular abdominal imaging benchmarks, and has the potential to segment abdominal images more accurately than human experts.
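The single-generator, multi-discriminator idea can be sketched as follows. This is a deliberately tiny PyTorch analog rather than the authors' Ensemble-GAN: the networks, shapes, and data are placeholders, and the discriminators here differ only by random initialization instead of being trained on different data subsets with different losses as in the paper.

```python
# Minimal sketch: one segmentation generator adversarially trained against
# an ensemble of discriminators, each judging (image, mask) pairs.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())  # seg map
discs = [nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                       nn.Linear(8, 1)) for _ in range(3)]

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_ds = [torch.optim.Adam(d.parameters(), lr=1e-4) for d in discs]
bce = nn.BCEWithLogitsLoss()

image = torch.randn(4, 1, 64, 64)                 # placeholder slices
mask = (torch.rand(4, 1, 64, 64) > 0.5).float()   # placeholder labels

for step in range(2):
    # Discriminators: separate (image, true mask) from (image, prediction).
    fake = G(image).detach()
    for d, opt_d in zip(discs, opt_ds):
        opt_d.zero_grad()
        loss_d = bce(d(torch.cat([image, mask], 1)), torch.ones(4, 1)) + \
                 bce(d(torch.cat([image, fake], 1)), torch.zeros(4, 1))
        loss_d.backward()
        opt_d.step()
    # Generator: fool the whole ensemble (averaged adversarial feedback).
    opt_g.zero_grad()
    pred = G(image)
    loss_g = torch.stack([bce(d(torch.cat([image, pred], 1)),
                              torch.ones(4, 1)) for d in discs]).mean()
    loss_g.backward()
    opt_g.step()
```

Averaging the adversarial losses over several differently trained critics is what smooths the feedback signal; a practical system would add a supervised segmentation loss (e.g., Dice or cross-entropy) to the generator objective.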
Collapse
Affiliation(s)
- Mina Rezaei
- Hasso Plattner Institute, Prof.Dr. Helmert Street 2-3, Potsdam, Germany
| | - Janne J. Näppi
- Massachusetts General Hospital and Harvard Medical School, 25 New Chardon St., Boston, MA, USA
| | - Christoph Lippert
- Hasso Plattner Institute, Prof.Dr. Helmert Street 2-3, Potsdam, Germany
| | - Christoph Meinel
- Hasso Plattner Institute, Prof.Dr. Helmert Street 2-3, Potsdam, Germany
| | - Hiroyuki Yoshida
- Massachusetts General Hospital and Harvard Medical School, 25 New Chardon St., Boston, MA, USA
| |
Collapse
|
40
|
Ito R, Iwano S, Naganawa S. A review on the use of artificial intelligence for medical imaging of the lungs of patients with coronavirus disease 2019. Diagn Interv Radiol 2020; 26:443-448. [PMID: 32436845 PMCID: PMC7490030 DOI: 10.5152/dir.2019.20294] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Revised: 05/01/2020] [Accepted: 05/03/2020] [Indexed: 12/23/2022]
Abstract
The results of research on the use of artificial intelligence (AI) for medical imaging of the lungs of patients with coronavirus disease 2019 (COVID-19) have been published in various forms. In this study, we reviewed AI approaches to diagnostic imaging of COVID-19 pneumonia. PubMed, arXiv, medRxiv, and Google Scholar were searched for AI studies. Fifteen studies of COVID-19 used AI for medical imaging; of these, 11 applied AI to computed tomography (CT) and 4 to chest radiography. Eight studies presented independent test data, 5 used publicly disclosed data, and 4 disclosed their AI source code. Dataset sizes ranged from 106 to 5941, with sensitivities ranging from 0.67 to 1.00 and specificities from 0.81 to 1.00 for prediction of COVID-19 pneumonia. The four studies with independent test datasets reported a breakdown of the data ratio and achieved very high sensitivity, specificity, and area under the curve (AUC) for prediction of COVID-19 pneumonia, in the ranges of 0.9-0.98, 0.91-0.96, and 0.96-0.99, respectively.
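For readers less familiar with the screening metrics quoted above, the short sketch below shows how sensitivity, specificity, and AUC are derived from a classifier's outputs. The labels and scores are synthetic placeholders, not data from any of the reviewed studies.

```python
# Minimal sketch: computing sensitivity, specificity, and AUC from
# predicted probabilities for a binary screening task.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = COVID-19 pneumonia
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.3, 0.6, 0.7, 0.1])
y_pred = (y_score >= 0.5).astype(int)          # threshold the scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))          # true positive rate
print("specificity:", tn / (tn + fp))          # true negative rate
print("AUC:", roc_auc_score(y_true, y_score))  # threshold-free summary
```

Sensitivity and specificity depend on the chosen threshold, whereas AUC summarizes performance across all thresholds, which is why the reviewed studies report all three.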
Collapse
Affiliation(s)
- Rintaro Ito
- From the Department of Innovative Biomedical Visualization (R.I.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan; Department of Radiology (S.I., S.N.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan
| | - Shingo Iwano
- From the Department of Innovative Biomedical Visualization (R.I.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan; Department of Radiology (S.I., S.N.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan
| | - Shinji Naganawa
- From the Department of Innovative Biomedical Visualization (R.I.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan; Department of Radiology (S.I., S.N.), Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan
| |
Collapse
|