1. Choi W, Kim CH, Yoo H, Yun HR, Kim DW, Kim JW. Development and validation of a reliable method for automated measurements of psoas muscle volume in CT scans using deep learning-based segmentation: a cross-sectional study. BMJ Open 2024; 14:e079417. [PMID: 38777592] [PMCID: PMC11116865] [DOI: 10.1136/bmjopen-2023-079417]
Abstract
OBJECTIVES We aimed to develop an automated method for measuring the volume of the psoas muscle using CT to aid sarcopenia research efficiently. METHODS We used a data set comprising the CT scans of 520 participants who underwent health check-ups at a health promotion centre. We developed a psoas muscle segmentation model using deep learning in a three-step process based on the nnU-Net method. The automated segmentation method was evaluated for accuracy, reliability, and time required for the measurement. RESULTS The Dice similarity coefficient was used to compare the manual segmentation with automated segmentation; an average Dice score of 0.927 ± 0.019 was obtained, with no critical outliers. Our automated segmentation system had an average measurement time of 2 min 20 s ± 20 s, which was 48 times shorter than that of the manual measurement method (111 min 6 s ± 25 min 25 s). CONCLUSION We have successfully developed an automated segmentation method to measure the psoas muscle volume that ensures consistent and unbiased estimates across a wide range of CT images.
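The Dice similarity coefficient reported above is straightforward to compute from a pair of binary masks. A minimal NumPy sketch, using toy masks rather than any data from the study:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example: two overlapping square "psoas" masks.
manual = np.zeros((10, 10), dtype=bool)
auto = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True  # 36 voxels
auto[3:9, 3:9] = True    # 36 voxels, 25 of them overlapping
print(round(dice_coefficient(manual, auto), 3))  # 2*25/72 ≈ 0.694
```

The same formula applies voxel-wise in 3D, which is how a volume-level Dice such as the study's 0.927 would be obtained.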
Affiliation(s)
- Woorim Choi
- Biomedical Research Center, Asan Medical Center, Songpa-gu, Seoul, Republic of Korea
- Chul-Ho Kim
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, Seoul, Republic of Korea
- Hyein Yoo
- Biomedical Research Center, Asan Medical Center, Songpa-gu, Seoul, Republic of Korea
- Hee Rim Yun
- Coreline Soft Co., Ltd, Mapo-gu, Seoul, Republic of Korea
- Da-Wit Kim
- Coreline Soft Co., Ltd, Mapo-gu, Seoul, Republic of Korea
- Ji Wan Kim
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, Seoul, Republic of Korea
2. Kim H, Yoo SK, Kim JS, Kim YT, Lee JW, Kim C, Hong CS, Lee H, Han MC, Kim DW, Kim SY, Kim TM, Kim WH, Kong J, Kim YB. Clinical feasibility of deep learning-based synthetic CT images from T2-weighted MR images for cervical cancer patients compared to MRCAT. Sci Rep 2024; 14:8504. [PMID: 38605094] [PMCID: PMC11009270] [DOI: 10.1038/s41598-024-59014-6]
Abstract
This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A patient cohort with 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 for the training and 10 for the testing phase. As a preprocessing step, we conducted deformable image registration and Nyul intensity normalization for the MR images to maximize the similarity between the MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To prove clinical feasibility, we assessed the accuracy of the synthetic CT images in image similarity using the structural similarity index (SSIM) and mean absolute error (MAE), and in dosimetric similarity using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% greater than those of the MRCAT images.
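The two similarity metrics used above can be sketched in a few lines of NumPy. Here hypothetical 1D profiles stand in for CT volumes and dose grids, and the brute-force gamma evaluation is illustrative only (clinical gamma analysis is done in 3D with interpolation):

```python
import numpy as np

def mae_hu(img_a, img_b):
    """Mean absolute error between two images, e.g. in Hounsfield units."""
    return np.abs(np.asarray(img_a, float) - np.asarray(img_b, float)).mean()

def gamma_pass_rate_1d(dose_ref, dose_eval, spacing_mm, dd, dta_mm, cutoff):
    """Global gamma passing rate on 1D dose profiles (brute force).

    dd: dose-difference criterion (absolute dose), dta_mm: distance criterion,
    cutoff: reference points below this dose are excluded from scoring.
    """
    x = np.arange(len(dose_ref)) * spacing_mm
    passed = total = 0
    for i, d_ref in enumerate(dose_ref):
        if d_ref < cutoff:
            continue
        total += 1
        dist_term = (x - x[i]) / dta_mm
        dose_term = (np.asarray(dose_eval) - d_ref) / dd
        # Gamma passes if any evaluated point is close in dose AND distance.
        if np.sqrt(dist_term**2 + dose_term**2).min() <= 1.0:
            passed += 1
    return passed / total if total else float("nan")

hu_true = np.array([0.0, 40.0, 1000.0, -500.0])
hu_syn = np.array([10.0, 50.0, 980.0, -490.0])
print(mae_hu(hu_true, hu_syn))  # 12.5

dose = np.array([0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0])
print(gamma_pass_rate_1d(dose, dose, 1.0, 0.01, 1.0, 0.1))  # identical doses: 1.0
```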
Affiliation(s)
- Hojin Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Sang Kyun Yoo
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Yong Tae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jai Wo Lee
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Changhwan Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Chae-Seon Hong
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Min Cheol Han
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Dong Wook Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Se Young Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Tae Min Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Woo Hyoung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jayoung Kong
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Yong Bae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
3. Dei D, Lambri N, Crespi L, Brioso RC, Loiacono D, Clerici E, Bellu L, De Philippis C, Navarria P, Bramanti S, Carlo-Stella C, Rusconi R, Reggiori G, Tomatis S, Scorsetti M, Mancosu P. Deep learning and atlas-based models to streamline the segmentation workflow of total marrow and lymphoid irradiation. La Radiologia Medica 2024; 129:515-523. [PMID: 38308062] [DOI: 10.1007/s11547-024-01760-8]
Abstract
PURPOSE To improve the workflow of total marrow and lymphoid irradiation (TMLI) by enhancing the delineation of organs at risk (OARs) and clinical target volume (CTV) using deep learning (DL) and atlas-based (AB) segmentation models. MATERIALS AND METHODS Ninety-five TMLI plans optimized in our institute were analyzed. Two commercial DL software packages were tested for segmenting 18 OARs. An AB model for lymph node CTV (CTV_LN) delineation was built using 20 TMLI patients. The AB model was evaluated on 20 independent patients, and a semiautomatic approach was tested by correcting the automatic contours. The generated OAR and CTV_LN contours were compared to manual contours in terms of topological agreement, dose statistics, and time workload. A clinical decision tree was developed to define a specific contouring strategy for each OAR. RESULTS The two DL models achieved a median [interquartile range] Dice similarity coefficient (DSC) of 0.84 [0.71; 0.93] and 0.85 [0.70; 0.93] across the OARs. The absolute median Dmean difference between the manual and the two DL models was 2.0 [0.7; 6.6]% and 2.4 [0.9; 7.1]%. The AB model achieved a median DSC of 0.70 [0.66; 0.74] for CTV_LN delineation, increasing to 0.94 [0.94; 0.95] after manual revision, with minimal Dmean differences. Since September 2022, our institution has implemented DL and AB models for all TMLI patients, reducing the time required to complete the entire segmentation process from 5 to 2 h. CONCLUSION DL models can streamline the TMLI contouring process of OARs. Manual revision is still necessary for lymph node delineation using AB models.
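The median [interquartile range] reporting convention used above is easy to reproduce; a small sketch with made-up per-OAR DSC values (not the study's data):

```python
import numpy as np

# Hypothetical per-OAR Dice scores for one model (illustrative values only).
dsc = np.array([0.93, 0.84, 0.71, 0.88, 0.79, 0.90, 0.66, 0.85])
median = np.median(dsc)
q1, q3 = np.percentile(dsc, [25, 75])
print(f"DSC {median:.2f} [{q1:.2f}; {q3:.2f}]")
```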
Affiliation(s)
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Health Data Science Centre, Human Technopole, Milan, Italy
- Ricardo Coimbra Brioso
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Elena Clerici
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Luisa Bellu
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Chiara De Philippis
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Pierina Navarria
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Stefania Bramanti
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Carmelo Carlo-Stella
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Roberto Rusconi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Giacomo Reggiori
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Stefano Tomatis
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
- Pietro Mancosu
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
4. Murphy PM. Towards an EKG for SBO: A Neural Network for Detection and Characterization of Bowel Obstruction on CT. Journal of Imaging Informatics in Medicine 2024. [PMID: 38388866] [DOI: 10.1007/s10278-024-01023-y]
Abstract
A neural network was developed to detect and characterize bowel obstruction, a common cause of acute abdominal pain. In this retrospective study, 202 CT scans of 165 patients with bowel obstruction from March to June 2022 were included and partitioned into training and test data sets. A multi-channel neural network was trained to segment the gastrointestinal tract, and to predict the diameter and the longitudinal position ("longitude") along the gastrointestinal tract using a novel embedding. Its performance was compared to manual segmentations using the Dice score, and to manual measurements of the diameter and longitude using intraclass correlation coefficients (ICC). ROC curves as well as sensitivity and specificity were calculated for diameters above a clinical threshold for obstruction, and for longitudes corresponding to small bowel. In the test data set, Dice score for segmentation of the gastrointestinal tract was 78 ± 8%. ICC between measured and predicted diameters was 0.72, indicating moderate agreement. ICC between measured and predicted longitude was 0.85, indicating good agreement. AUROC was 0.90 for detection of dilated bowel, and was 0.95 and 0.90 for differentiation of the proximal and distal gastrointestinal tract respectively. Overall sensitivity and specificity for dilated small bowel were 0.83 and 0.90. Since obstruction is diagnosed based on the diameter and longitude of the bowel, this neural network and embedding may enable detection and characterization of this important disease on CT.
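Sensitivity and specificity at a diameter cutoff reduce to simple counting once both measurements are dichotomized at the same threshold. A sketch with hypothetical diameters and a made-up 2.5 cm cutoff (the paper's exact clinical threshold is not restated here):

```python
import numpy as np

def sens_spec(measured, predicted, threshold):
    """Sensitivity/specificity of predicted diameters against measured ones,
    both dichotomized at the same clinical threshold."""
    truth = np.asarray(measured) >= threshold
    pred = np.asarray(predicted) >= threshold
    tp = np.sum(truth & pred)
    fn = np.sum(truth & ~pred)
    tn = np.sum(~truth & ~pred)
    fp = np.sum(~truth & pred)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return float(sens), float(spec)

measured = np.array([3.0, 2.0, 4.0, 1.5])   # cm, manual measurements
predicted = np.array([2.8, 2.6, 3.5, 1.0])  # cm, network predictions
print(sens_spec(measured, predicted, threshold=2.5))  # (1.0, 0.5)
```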
Affiliation(s)
- Paul M Murphy
- University of California-San Diego, 9500 Gilman Dr, La Jolla, CA, USA
- UCSD Radiology, 200 W Arbor Dr, San Diego, CA, 92103, USA
5. Zhang HW, Huang DL, Wang YR, Zhong HS, Pang HW. CT radiomics based on different machine learning models for classifying gross tumor volume and normal liver tissue in hepatocellular carcinoma. Cancer Imaging 2024; 24:20. [PMID: 38279133] [PMCID: PMC10811872] [DOI: 10.1186/s40644-024-00652-4]
Abstract
BACKGROUND & AIMS The present study utilized extracted computed tomography radiomics features to classify the gross tumor volume and normal liver tissue in hepatocellular carcinoma by mainstream machine learning methods, aiming to establish an automatic classification model. METHODS We recruited 104 pathologically confirmed hepatocellular carcinoma patients for this study. GTV and normal liver tissue samples were manually segmented into regions of interest and randomly divided into five-fold cross-validation groups. Dimensionality reduction was performed using LASSO regression. Radiomics models were constructed via logistic regression, support vector machine (SVM), random forest, Xgboost, and Adaboost algorithms. The diagnostic efficacy, discrimination, and calibration of the algorithms were verified using area under the receiver operating characteristic curve (AUC) analyses and calibration plot comparison. RESULTS Seven screened radiomics features excelled at distinguishing the gross tumor area. The Xgboost machine learning algorithm had the best discrimination and comprehensive diagnostic performance, with an AUC of 0.9975 [95% confidence interval (CI): 0.9973-0.9978] and a mean Matthews correlation coefficient (MCC) of 0.9369. SVM had the second-best discrimination and diagnostic performance, with an AUC of 0.9846 (95% CI: 0.9835-0.9857), a mean MCC of 0.9105, and better calibration. All other algorithms showed an excellent ability to distinguish between the gross tumor area and normal liver tissue (mean AUC 0.9825, 0.9861, 0.9727, and 0.9644 for the Adaboost, random forest, logistic regression, and naive Bayes algorithms, respectively). CONCLUSION CT radiomics based on machine learning algorithms can accurately classify GTV and normal liver tissue, while the Xgboost and SVM algorithms served as the best complementary algorithms.
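The Matthews correlation coefficient used to summarize the classifiers above can be computed directly from the confusion counts. A minimal sketch with toy labels, not the study's predictions:

```python
import numpy as np

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient from a binary confusion matrix."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom else 0.0

# Toy voxel-wise labels: 1 = gross tumor volume, 0 = normal liver.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(round(matthews_corrcoef(y_true, y_pred), 3))  # 0.333
```

Unlike accuracy, MCC stays informative when the two classes are imbalanced, which is presumably why it accompanies AUC here.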
Affiliation(s)
- Huai-Wen Zhang
- Department of Radiotherapy, The Second Affiliated Hospital of Nanchang Medical College, Jiangxi Clinical Research Center for Cancer, Jiangxi Cancer Hospital, 330029, Nanchang, China
- Department of Oncology, The third people's hospital of Jingdezhen, The third people's hospital of Jingdezhen affiliated to Nanchang Medical College, 333000, Jingdezhen, China
- De-Long Huang
- School of Clinical Medicine, Southwest Medical University, 646000, Luzhou, China
- Yi-Ren Wang
- School of Nursing, Southwest Medical University, 646000, Luzhou, China
- Hao-Shu Zhong
- Department of Hematology, Huashan Hospital, Fudan University, 200040, Shanghai, China
- Hao-Wen Pang
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, 646000, Luzhou, China
6. Kulkarni C, Sherkhane U, Jaiswar V, Mithun S, Mysore Siddu D, Rangarajan V, Dekker A, Traverso A, Jha A, Wee L. Comparing the performance of a deep learning-based lung gross tumour volume segmentation algorithm before and after transfer learning in a new hospital. BJR Open 2024; 6:tzad008. [PMID: 38352184] [PMCID: PMC10860512] [DOI: 10.1093/bjro/tzad008]
Abstract
Objectives Radiation therapy for lung cancer requires a gross tumour volume (GTV) to be carefully outlined by a skilled radiation oncologist (RO) to accurately pinpoint high radiation dose to a malignant mass while simultaneously minimizing radiation damage to adjacent normal tissues. This is manually intensive and tedious; however, it is feasible to train a deep learning (DL) neural network that could assist ROs to delineate the GTV. However, DL trained on large openly accessible data sets might not perform well when applied to a superficially similar task but in a different clinical setting. In this work, we tested the performance of a DL automatic lung GTV segmentation model trained on open-access Dutch data when used on Indian patients from a large public tertiary hospital, and hypothesized that generic DL performance could be improved for a specific local clinical context by means of modest transfer learning on a small representative local subset. Methods X-ray computed tomography (CT) series in a public data set called "NSCLC-Radiomics" from The Cancer Imaging Archive was first used to train a DL-based lung GTV segmentation model (Model 1). Its performance was assessed using a different open-access data set ("Interobserver1") of Dutch subjects plus a private Indian data set from a local tertiary hospital ("Test Set 2"). Another Indian data set ("Retrain Set 1") was used to fine-tune the former DL model using a transfer learning method. The Indian data sets were taken from the CT of a hybrid scanner based in nuclear medicine, but the GTV was drawn by skilled Indian ROs. The final (after fine-tuning) model (Model 2) was then re-evaluated on "Interobserver1" and "Test Set 2." Dice similarity coefficient (DSC), precision, and recall were used as geometric segmentation performance metrics. Results Model 1, trained exclusively on Dutch scans, showed a significant fall in performance when tested on "Test Set 2." However, the DSC of Model 2 recovered by 14 percentage points when evaluated in the same test set. Precision and recall showed a similar rebound of performance after transfer learning, in spite of using a comparatively small sample size. The performance of both models, before and after the fine-tuning, did not change significantly in "Interobserver1." Conclusions A large public open-access data set was used to train a generic DL model for lung GTV segmentation, but this did not perform well initially in the Indian clinical context. Using transfer learning methods, it was feasible to efficiently and easily fine-tune the generic model using only a small number of local examples from the Indian hospital. This led to a recovery of some of the geometric segmentation performance, but the tuning did not appear to affect the performance of the model in another open-access data set. Advances in knowledge Caution is needed when using models trained on large volumes of international data in a local clinical setting, even when that training data set is of good quality. Minor differences in scan acquisition and clinician delineation preferences may result in an apparent drop in performance. However, DL models have the advantage of being efficiently "adapted" from a generic to a locally specific context, with only a small amount of fine-tuning by means of transfer learning on a small local institutional data set.
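The transfer-learning step described above — reusing pretrained weights and nudging them with a few gradient updates on a small local data set — can be illustrated on a toy logistic-regression "model." Real fine-tuning of the segmentation network involves a deep network and far more data; everything below is a made-up example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(w_pretrained, X, y, lr=0.5, steps=200):
    """Gradient-descent steps on a small local set, starting from
    pretrained weights rather than a random initialization."""
    w = w_pretrained.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "Pretrained" weights from a hypothetical source domain.
w_pre = np.array([1.0, -1.0])
# Small local (target-domain) set whose decision boundary differs.
X_local = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0],
                    [-2.0, -1.0], [1.0, 2.0], [-1.0, -2.0]])
y_local = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])

w_ft = fine_tune(w_pre, X_local, y_local)
acc_pre = np.mean((sigmoid(X_local @ w_pre) > 0.5) == y_local.astype(bool))
acc_ft = np.mean((sigmoid(X_local @ w_ft) > 0.5) == y_local.astype(bool))
print(acc_pre, acc_ft)  # accuracy on the local set recovers after fine-tuning
```

In a deep network the same idea usually freezes most layers and updates only the last few with a small learning rate, but the mechanics — start from pretrained weights, take few steps on little data — are as above.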
Affiliation(s)
- Chaitanya Kulkarni
- Philips Research, Philips Innovation Campus, Bengaluru, Karnataka 560045, India
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Umesh Sherkhane
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Department of Nuclear Medicine and Radiology, Tata Memorial Hospital Mumbai, Mumbai, Maharashtra 400012, India
- Vinay Jaiswar
- Department of Nuclear Medicine and Radiology, Tata Memorial Hospital Mumbai, Mumbai, Maharashtra 400012, India
- Sneha Mithun
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Department of Nuclear Medicine and Radiology, Tata Memorial Hospital Mumbai, Mumbai, Maharashtra 400012, India
- Dinesh Mysore Siddu
- Philips Research, Philips Innovation Campus, Bengaluru, Karnataka 560045, India
- Venkatesh Rangarajan
- Department of Nuclear Medicine and Radiology, Tata Memorial Hospital Mumbai, Mumbai, Maharashtra 400012, India
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Alberto Traverso
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Faculty of Medicine, University Vita Salute, San Raffaele Hospital, 20132 Milan, Italy
- Ashish Jha
- Department of Nuclear Medicine and Radiology, Tata Memorial Hospital Mumbai, Mumbai, Maharashtra 400012, India
- Leonard Wee
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
7. Murphy PM. Visual Image Annotation for Bowel Obstruction: Repeatability and Agreement with Manual Annotation and Neural Networks. J Digit Imaging 2023; 36:2179-2193. [PMID: 37278918] [PMCID: PMC10502000] [DOI: 10.1007/s10278-023-00825-w]
Abstract
Bowel obstruction is a common cause of acute abdominal pain. The development of algorithms for automated detection and characterization of bowel obstruction on CT has been limited by the effort required for manual annotation. Visual image annotation with an eye tracking device may mitigate that limitation. The purpose of this study is to assess the agreement between visual and manual annotations for bowel segmentation and diameter measurement, and to assess agreement with convolutional neural networks (CNNs) trained using that data. Sixty CT scans of 50 patients with bowel obstruction from March to June 2022 were retrospectively included and partitioned into training and test data sets. An eye tracking device was used to record 3-dimensional coordinates within the scans while a radiologist cast their gaze at the centerline of the bowel and adjusted the size of a superimposed ROI to approximate the diameter of the bowel. For each scan, 59.4 ± 15.1 segments, 847.9 ± 228.1 gaze locations, and 5.8 ± 1.2 m of bowel were recorded. 2D and 3D CNNs were trained using these data to predict bowel segmentation and diameter maps from the CT scans. For comparisons between two repetitions of visual annotation, CNN predictions, and manual annotations, Dice scores for bowel segmentation ranged from 0.69 ± 0.17 to 0.81 ± 0.04 and intraclass correlations [95% CI] for diameter measurement ranged from 0.672 [0.490-0.782] to 0.940 [0.933-0.947]. Thus, visual image annotation is a promising technique for training CNNs to perform bowel segmentation and diameter measurement in CT scans of patients with bowel obstruction.
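The intraclass correlation used to compare diameter measurements has a closed form. Below is a sketch of ICC(2,1) — two-way random effects, absolute agreement, single measure, one common choice for two raters; the paper's exact ICC variant is not restated here, and the ratings are toy values:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    ratings has shape (n_subjects, k_raters)."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)  # per-subject means
    col_means = r.mean(axis=0)  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    resid = r - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))        # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy diameters (cm): column 0 = visual annotation, column 1 = manual.
ratings = np.array([[2.1, 2.0], [3.4, 3.5], [1.2, 1.1], [4.0, 4.2]])
print(round(icc_2_1(ratings), 3))
```

Because ICC(2,1) penalizes systematic offsets between raters, it is stricter than a plain Pearson correlation for agreement studies like this one.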
Affiliation(s)
- Paul M Murphy
- University of California-San Diego, 9500 Gilman Dr, 92093, La Jolla, CA, USA.
- UCSD Radiology, 200 W Arbor Dr, 92103, San Diego, CA, USA.
8. Chwał J, Kostka P, Tkacz E. Assessment of the Extent of Intracerebral Hemorrhage Using 3D Modeling Technology. Healthcare (Basel) 2023; 11:2441. [PMID: 37685475] [PMCID: PMC10487057] [DOI: 10.3390/healthcare11172441]
Abstract
The second most common cause of stroke, accounting for 10% of hospital admissions, is intracerebral hemorrhage (ICH), and risk factors include diabetes, smoking, and hypertension. People with intracerebral bleeding experience symptoms that are related to the functions that are managed by the affected part of the brain. Having obtained 15 computed tomography (CT) scans from five patients with ICH, we decided to use three-dimensional (3D) modeling technology to estimate the bleeding volume. CT was performed on admission to hospital, and after one week and two weeks of treatment. We segmented the brain, ventricles, and hemorrhage using semi-automatic algorithms in Slicer 3D, then improved the obtained models in Blender. Moreover, the accuracy of the models was checked by comparing corresponding CT scans with 3D brain model cross-sections. The goal of the research was to examine the possibility of using 3D modeling technology to visualize intracerebral hemorrhage and assess its treatment.
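Once the hemorrhage is segmented, the volume estimate is just the voxel count times the per-voxel volume from the CT header. A toy NumPy sketch; the spacing values are hypothetical, not those of the study's scans:

```python
import numpy as np

# Hypothetical voxel spacing from the CT header (mm): 0.5 x 0.5 in-plane, 5.0 slice.
spacing_mm = (0.5, 0.5, 5.0)
voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> mL

mask = np.zeros((4, 10, 10), dtype=bool)  # toy hemorrhage segmentation
mask[1:3, 2:8, 2:8] = True                # 2 slices x 36 pixels = 72 voxels
volume_ml = mask.sum() * voxel_volume_ml  # 72 x 1.25 mm^3 = 90 mm^3 ≈ 0.09 mL
print(volume_ml)
```

Tracking this number across the admission, one-week, and two-week scans gives the treatment-response trend the study visualizes with 3D models.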
Affiliation(s)
- Joanna Chwał
- Department of Biosensors and Processing of Biomedical Signals, Faculty of Biomedical Engineering, Silesian University of Technology, 44-100 Gliwice, Poland
- Joint Doctoral School, Silesian University of Technology, 44-100 Gliwice, Poland
- Paweł Kostka
- Department of Biosensors and Processing of Biomedical Signals, Faculty of Biomedical Engineering, Silesian University of Technology, 44-100 Gliwice, Poland
- Ewaryst Tkacz
- Department of Biosensors and Processing of Biomedical Signals, Faculty of Biomedical Engineering, Silesian University of Technology, 44-100 Gliwice, Poland
9. Li Y, Zou B, Dai P, Liao M, Bai HX, Jiao Z. AC-E Network: Attentive Context-Enhanced Network for Liver Segmentation. IEEE J Biomed Health Inform 2023; 27:4052-4061. [PMID: 37204947] [DOI: 10.1109/jbhi.2023.3278079]
Abstract
Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, while 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome this limitation, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into the 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters; and 2) a dual segmentation branch with a complemental loss that makes the network attend to both the liver region and its boundary, so that the segmented liver surface is obtained with high accuracy. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision against the number of model parameters.
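The dual-branch idea — supervising both the region and its boundary — requires a boundary map derived from the region mask. A pure-NumPy sketch for the 2D case, as a simple stand-in for the paper's loss construction:

```python
import numpy as np

def boundary_map(mask):
    """Boundary of a 2D binary mask: foreground pixels with at least one
    4-neighbour outside the mask (a pure-NumPy erosion)."""
    m = np.asarray(mask, dtype=bool)
    p = np.pad(m, 1, constant_values=False)
    eroded = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
              & p[1:-1, :-2] & p[1:-1, 2:])
    return m & ~eroded

mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True            # a 5x5 "liver" region
print(boundary_map(mask).sum())  # 16 boundary pixels (25 total - 9 interior)
```

A boundary-aware loss term would then compare this map against the predicted boundary, complementing the usual region-overlap loss.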
10. Tong N, Xu Y, Zhang J, Gou S, Li M. Robust and efficient abdominal CT segmentation using shape constrained multi-scale attention network. Phys Med 2023; 110:102595. [PMID: 37178624] [DOI: 10.1016/j.ejmp.2023.102595]
Abstract
PURPOSE Although many deep learning-based abdominal multi-organ segmentation networks have been proposed, the varied intensity distributions and organ shapes of CT images from multi-center, multi-phase studies with various diseases introduce new challenges for robust abdominal CT segmentation. To achieve robust and efficient abdominal multi-organ segmentation, a new two-stage method is presented in this study. METHODS A binary segmentation network is used for coarse localization, followed by a multi-scale attention network for the fine segmentation of the liver, kidney, spleen, and pancreas. To constrain the organ shapes produced by the fine segmentation network, an additional network is pre-trained to learn the shape features of organs with serious diseases and then employed to constrain the training of the fine segmentation network. RESULTS The performance of the presented segmentation method was extensively evaluated on the multi-center data set from the Fast and Low GPU Memory Abdominal oRgan sEgmentation (FLARE) challenge, which was held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) were calculated to quantitatively evaluate the segmentation accuracy and efficiency. An average DSC and NSD of 83.7% and 64.4% were achieved, and our method finally won second place among more than 90 participating teams. CONCLUSIONS The evaluation results on the public challenge demonstrate that our method shows promising performance in robustness and efficiency, which may promote the clinical application of automatic abdominal multi-organ segmentation.
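The two-stage design uses the coarse binary segmentation only to localize the organs; cropping the volume to the coarse mask's bounding box before fine segmentation can be sketched as follows (illustrative only, not the authors' code):

```python
import numpy as np

def crop_to_roi(volume, coarse_mask, margin=2):
    """Crop a CT volume to the bounding box of a coarse binary mask,
    expanded by a voxel margin and clipped to the volume bounds."""
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    roi = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[roi], roi

volume = np.zeros((10, 10, 10))
coarse = np.zeros((10, 10, 10), dtype=bool)
coarse[4:6, 4:6, 4:6] = True  # coarse organ localization
cropped, roi = crop_to_roi(volume, coarse)
print(cropped.shape)  # (6, 6, 6)
```

Running the fine network only on this ROI is what makes the two-stage approach memory- and compute-efficient, which matters for a "Low GPU Memory" challenge.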
Affiliation(s)
- Nuo Tong: AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an, Shaanxi 710071, China
- Yinan Xu: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Jinsong Zhang: Xijing Hospital of Air Force Military Medical University, Xi'an, Shaanxi 710032, China
- Shuiping Gou: AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, and Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Mengbin Li: Xijing Hospital of Air Force Military Medical University, Xi'an, Shaanxi 710032, China

11
Pan S, Chang CW, Wang T, Wynne J, Hu M, Lei Y, Liu T, Patel P, Roper J, Yang X. Abdomen CT multi-organ segmentation using token-based MLP-Mixer. Med Phys 2023; 50:3027-3038. [PMID: 36463516] [PMCID: PMC10175083] [DOI: 10.1002/mp.16135]
Abstract
BACKGROUND Manual contouring is very labor-intensive, time-consuming, and subject to intra- and inter-observer variability. An automated deep learning approach to fast and accurate contouring and segmentation is desirable during radiotherapy treatment planning. PURPOSE This work investigates an efficient deep-learning-based segmentation algorithm in abdomen computed tomography (CT) to facilitate radiation treatment planning. METHODS In this work, we propose a novel deep-learning model utilizing U-shaped multi-layer perceptron mixer (MLP-Mixer) and convolutional neural network (CNN) for multi-organ segmentation in abdomen CT images. The proposed model has a similar structure to V-net, while a proposed MLP-Convolutional block replaces each convolutional block. The MLP-Convolutional block consists of three components: an early convolutional block for local features extraction and feature resampling, a token-based MLP-Mixer layer for capturing global features with high efficiency, and a token projector for pixel-level detail recovery. We evaluate our proposed network using: (1) an institutional dataset with 60 patient cases and (2) a public dataset (BCTV) with 30 patient cases. The network performance was quantitatively evaluated in three domains: (1) volume similarity between the ground truth contours and the network predictions using the Dice score coefficient (DSC), sensitivity, and precision; (2) surface similarity using Hausdorff distance (HD), mean surface distance (MSD) and residual mean square distance (RMS); and (3) the computational complexity reported by the number of network parameters, training time, and inference time. The performance of the proposed network is compared with other state-of-the-art networks. 
RESULTS On the institutional dataset, the proposed network achieved the following volume similarity measures when averaged over all organs: DSC = 0.912, sensitivity = 0.917, and precision = 0.917; the average surface similarity measures were HD = 11.95 mm, MSD = 1.90 mm, and RMS = 3.86 mm. The proposed network achieved DSC = 0.786 and HD = 9.04 mm on the public dataset. The network also shows statistically significant improvement, evaluated by a two-tailed Wilcoxon Mann-Whitney U test, on the right lung (MSD, maximum p-value 0.001), spinal cord (sensitivity, precision, HD, and RMSD, p-values from 0.001 to 0.039), and stomach (DSC, maximum p-value 0.01) over all other competing networks. On the public dataset, the network reports statistically significant improvement, again by the Wilcoxon Mann-Whitney test, on the pancreas (HD, maximum p-value 0.006) and the left (HD, maximum p-value 0.022) and right (DSC, maximum p-value 0.026) adrenal glands. On both datasets, the proposed method can generate contours in less than 5 s. Overall, the proposed MLP-Vnet demonstrates comparable or better performance than competing methods, with much lower memory complexity and higher speed. CONCLUSIONS The proposed MLP-Vnet demonstrates superior segmentation performance, in terms of accuracy and efficiency, relative to state-of-the-art methods. This reliable and efficient method demonstrates potential to streamline clinical workflows in abdominal radiotherapy, which may be especially important for online adaptive treatments.
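The two-tailed Wilcoxon Mann-Whitney U test used for the per-organ comparisons above is available in SciPy. A short sketch on synthetic per-case Dice scores (the data here are placeholders, not the paper's results):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative per-case Dice scores for two competing models
rng = np.random.default_rng(0)
dsc_model_a = rng.normal(0.91, 0.02, size=30)
dsc_model_b = rng.normal(0.88, 0.03, size=30)

# Two-tailed (nonparametric) Wilcoxon Mann-Whitney U test
stat, p = mannwhitneyu(dsc_model_a, dsc_model_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```

The test makes no normality assumption, which suits bounded, often skewed metrics like DSC.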
Affiliation(s)
- Shaoyan Pan, Mingzhe Hu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, and Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, USA
- Chih-Wei Chang, Tonghe Wang, Jacob Wynne, Yang Lei, Pretesh Patel, Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology, Mount Sinai Medical Center, New York, NY 10029, USA

12
Cheng Z, Wang L. Dynamic hierarchical multi-scale fusion network with axial MLP for medical image segmentation. Sci Rep 2023; 13:6342. [PMID: 37072483] [PMCID: PMC10113245] [DOI: 10.1038/s41598-023-32813-z]
Abstract
Medical image segmentation provides effective methods for accurate and robust organ segmentation, lesion detection, and classification. Medical images have fixed structures, simple semantics, and diverse details, so fusing rich multi-scale features can improve segmentation accuracy. Given that the density of diseased tissue may be comparable to that of surrounding normal tissue, both global and local information are critical to segmentation results. Therefore, considering the importance of multi-scale, global, and local information, in this paper we propose the dynamic hierarchical multi-scale fusion network with Axial-MLP (multilayer perceptron) (DHMF-MLP), which integrates the proposed hierarchical multi-scale fusion (HMSF) module. Specifically, HMSF not only reduces the loss of detail information by integrating the features from each stage of the encoder, but also provides different receptive fields, thereby improving the segmentation of small lesions and multi-lesion regions. Within HMSF, we propose the adaptive attention mechanism (ASAM) to adaptively resolve the semantic conflicts arising during fusion, and we introduce Axial-MLP to improve the global modeling capability of the network. Extensive experiments on public datasets confirm the excellent performance of the proposed DHMF-MLP. In particular, on the BUSI, ISIC 2018, and GlaS datasets, IoU reaches 70.65%, 83.46%, and 87.04%, respectively.
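The IoU figures quoted above are Jaccard indices; for binary masks, IoU relates to Dice as IoU = DSC / (2 − DSC). A small NumPy sketch with illustrative masks:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy example: two 4-pixel squares overlapping in exactly 1 pixel
a = np.zeros((4, 4)); a[:2, :2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(round(iou(a, b), 3))  # 1 / 7 ≈ 0.143
```

Because IoU penalizes disagreement more heavily than Dice, an IoU of 70.65% corresponds to a noticeably higher Dice score on the same masks.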
Affiliation(s)
- Zhikun Cheng, Liejun Wang: College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China

13
Reed MB, Ponce de León M, Vraka C, Rausch I, Godbersen GM, Popper V, Geist BK, Komorowski A, Nics L, Schmidt C, Klug S, Langsteger W, Karanikas G, Traub-Weidinger T, Hahn A, Lanzenberger R, Hacker M. Whole-body metabolic connectivity framework with functional PET. Neuroimage 2023; 271:120030. [PMID: 36925087] [DOI: 10.1016/j.neuroimage.2023.120030]
Abstract
The nervous and circulatory systems interconnect the various organs of the human body, building hierarchically organized subsystems and enabling fine-tuned, metabolically expensive brain-body and inter-organ crosstalk that adapts appropriately to internal and external demands. A deviation or failure in the function of a single organ or subsystem could trigger unforeseen biases or dysfunctions of the entire network, leading to maladaptive physiological or psychological responses. Therefore, quantifying these networks in healthy individuals and patients may help further our understanding of complex disorders involving body-brain crosstalk. Here we present a generalized framework to automatically estimate metabolic inter-organ connectivity utilizing whole-body functional positron emission tomography (fPET). The developed framework was applied to 16 healthy subjects (mean age ± SD, 25 ± 6 years; 13 female) who underwent one dynamic 18F-FDG PET/CT scan. Multiple procedures for organ segmentation (manual, automatic, circular volumes) and connectivity estimation (polynomial fitting, spatiotemporal filtering, covariance matrices) were compared to provide an optimized, thorough overview of the workflow. The proposed approach was able to estimate the metabolic connectivity patterns within brain regions and organs as well as their interactions. Automated organ delineation, but not simplified circular volumes, showed high agreement with manual delineation. Polynomial fitting yielded similar connectivity to spatiotemporal filtering at the individual subject level. Furthermore, connectivity measures and group-level covariance matrices did not match. The strongest brain-body connectivity was observed for the liver and kidneys. The proposed framework offers novel opportunities for analyzing metabolic function from a systemic, hierarchical perspective in a multitude of physiological and pathological states.
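A covariance- or correlation-based connectivity matrix over organ time-activity curves, one of the estimation procedures compared above, can be sketched in a few lines. The organ list and synthetic curves below are illustrative only, not the authors' data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
organs = ["brain", "liver", "kidney", "heart"]

# Synthetic time-activity curves: a shared uptake trend plus organ-specific noise
t = np.linspace(0, 60, 120)                       # scan time in minutes
shared = np.log1p(t)                              # common tracer-uptake trend
curves = np.stack([shared + 0.1 * rng.standard_normal(t.size)
                   for _ in organs])              # shape (n_organs, n_timepoints)

# Metabolic "connectivity" as the pairwise correlation of the curves
conn = np.corrcoef(curves)
print(conn.shape)  # (4, 4)
```

Each off-diagonal entry of `conn` quantifies how strongly two organs' activity curves co-vary over the scan; the diagonal is 1 by construction.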
Affiliation(s)
- Murray Bruce Reed, Magdalena Ponce de León, Godber Mathis Godbersen, Valentin Popper, Arkadiusz Komorowski, Clemens Schmidt, Sebastian Klug, Andreas Hahn, Rupert Lanzenberger: Department of Psychiatry and Psychotherapy, Comprehensive Center for Clinical Neurosciences and Mental Health (C3NMH), Medical University of Vienna, Austria
- Chrysoula Vraka, Barbara Katharina Geist, Lukas Nics, Werner Langsteger, Georgios Karanikas, Tatjana Traub-Weidinger, Marcus Hacker: Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
- Ivo Rausch: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria

14
Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is 'Clinically Acceptable'? Diagnostics (Basel) 2023; 13:667. [PMID: 36832155] [PMCID: PMC9955359] [DOI: 10.3390/diagnostics13040667]
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining clinical acceptability of new autocontouring and planning tools.
Affiliation(s)
- Hana Baroudi, Kristy K. Brock, Wenhua Cao, Xinru Chen, Laurence E. Court, Mohammad D. El Basha, Skylar Gay, Mary P. Gronberg, Aashish Chandra Gupta, Soleil Hernandez, Kai Huang, David A. Jaffray, Rebecca Lim, Barbara Marquez, Kelly Nealon, Tucker J. Netherton, Dong Joo Rhee, Ramon M. Salazar, Jinzhong Yang, Cenji Yu, Yao Zhao: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Hana Baroudi, Xinru Chen, Mohammad D. El Basha, Skylar Gay, Mary P. Gronberg, Aashish Chandra Gupta, Soleil Hernandez, Kai Huang, Rebecca Lim, Barbara Marquez, Kelly Nealon, Brandon Reber, Cenji Yu, Yao Zhao: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kristy K. Brock, Aashish Chandra Gupta, David A. Jaffray, Callistus M. Nguyen, Brandon Reber, McKell Woodland: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Caroline Chung, Maguy Farhat: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mihir D. Shanker: The University of Queensland, Saint Lucia 4072, Australia, and The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos Sjogreen: Department of Physics, University of Houston, Houston, TX 77004, USA
- McKell Woodland: Department of Computer Science, Rice University, Houston, TX 77005, USA
- Correspondence: Laurence E. Court

15
Önder M, Evli C, Türk E, Kazan O, Bayrakdar İŞ, Çelik Ö, Costa ALF, Gomes JPP, Ogawa CM, Jagtap R, Orhan K. Deep-Learning-Based Automatic Segmentation of Parotid Gland on Computed Tomography Images. Diagnostics (Basel) 2023; 13:581. [PMID: 36832069] [PMCID: PMC9955422] [DOI: 10.3390/diagnostics13040581]
Abstract
This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using the U-Net architecture and to evaluate the model's performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 pixels and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. Automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and area under the curve (AUC). A segmentation was counted as successful when more than 50% of its pixels intersected the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands in the axial CT slices were each found to be 1. The AUC value was 0.96. This study has shown that it is possible to use AI models based on deep learning to automatically segment the parotid gland on axial CT images.
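The >50% overlap criterion for counting a prediction as a successful segmentation can be written down directly. This is one plausible reading of the criterion (coverage of the ground-truth pixels); the function name and masks are illustrative, not from the paper:

```python
import numpy as np

def is_successful(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5):
    """Count a prediction as successful when more than `threshold` of the
    ground-truth pixels are covered by the predicted mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    gt_pixels = truth.sum()
    if gt_pixels == 0:
        return pred.sum() == 0  # empty ground truth: success iff prediction empty
    coverage = np.logical_and(pred, truth).sum() / gt_pixels
    return coverage > threshold

# Toy example: the prediction covers 12 of the 16 ground-truth pixels (75%)
truth_mask = np.zeros((8, 8)); truth_mask[2:6, 2:6] = 1
pred_mask = np.zeros((8, 8));  pred_mask[2:6, 2:5] = 1
print(is_successful(pred_mask, truth_mask))  # True (0.75 > 0.5)
```

Per-slice success counts of this kind then feed the reported precision, sensitivity, and F1 statistics.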
Affiliation(s)
- Merve Önder, Cengiz Evli: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Ezgi Türk: Dentomaxillofacial Radiology, Oral and Dental Health Center, Hatay 31040, Turkey
- Orhan Kazan: Health Services Vocational School, Gazi University, Ankara 06560, Turkey
- İbrahim Şevki Bayrakdar: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir 26040, Turkey; Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey; and Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Özer Çelik: Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, and Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
- Andre Luiz Ferreira Costa, Celso Massahiro Ogawa: Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- João Pedro Perez Gomes: Department of Stomatology, Division of General Pathology, School of Dentistry, University of São Paulo (USP), São Paulo 13560-970, SP, Brazil
- Rohan Jagtap: Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Kaan Orhan (corresponding author; Tel.: +48-81-448-50-00): Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland; and Ankara University Medical Design Application and Research Center (MEDITAM), Ankara 06000, Turkey

16
Robinson-Weiss C, Patel J, Bizzo BC, Glazer DI, Bridge CP, Andriole KP, Dabiri B, Chin JK, Dreyer K, Kalpathy-Cramer J, Mayo-Smith WW. Machine Learning for Adrenal Gland Segmentation and Classification of Normal and Adrenal Masses at CT. Radiology 2023; 306:e220101. [PMID: 36125375] [DOI: 10.1148/radiol.220101]
Abstract
Background Adrenal masses are common, but radiology reporting and recommendations for management can be variable. Purpose To create a machine learning algorithm to segment adrenal glands on contrast-enhanced CT images and classify glands as normal or mass-containing and to assess algorithm performance. Materials and Methods This retrospective study included two groups of contrast-enhanced abdominal CT examinations (development data set and secondary test set). Adrenal glands in the development data set were manually segmented by radiologists. Images in both the development data set and the secondary test set were manually classified as normal or mass-containing. Deep learning segmentation and classification models were trained on the development data set and evaluated on both data sets. Segmentation performance was evaluated with use of the Dice similarity coefficient (DSC), and classification performance with use of sensitivity and specificity. Results The development data set contained 274 CT examinations (251 patients; median age, 61 years; 133 women), and the secondary test set contained 991 CT examinations (991 patients; median age, 62 years; 578 women). The median model DSC on the development test set was 0.80 (IQR, 0.78-0.89) for normal glands and 0.84 (IQR, 0.79-0.90) for adrenal masses. On the development reader set, the median interreader DSC was 0.89 (IQR, 0.78-0.93) for normal glands and 0.89 (IQR, 0.85-0.97) for adrenal masses. Interreader DSC for radiologist manual segmentation did not differ from automated machine segmentation (P = .35). On the development test set, the model had a classification sensitivity of 83% (95% CI: 55, 95) and specificity of 89% (95% CI: 75, 96). On the secondary test set, the model had a classification sensitivity of 69% (95% CI: 58, 79) and specificity of 91% (95% CI: 90, 92). 
Conclusion A two-stage machine learning pipeline was able to segment the adrenal glands and differentiate normal adrenal glands from those containing masses. © RSNA, 2022. Online supplemental material is available for this article.
Affiliation(s)
- Cory Robinson-Weiss, Jay Patel, Bernardo C Bizzo, Daniel I Glazer, Christopher P Bridge, Katherine P Andriole, Borna Dabiri, John K Chin, Keith Dreyer, Jayashree Kalpathy-Cramer: From the Department of Radiology, Brigham and Women's Hospital (BWH), Harvard Medical School, 75 Francis St, Boston, MA 02115 (C.R.W., D.I.G., K.P.A., B.D., W.W.M-S.); Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Mass (J.P., C.P.B., J. Kalpathy-Cramer); Health Sciences and Technology Department, Massachusetts Institute of Technology, Cambridge, Mass (J.P.); Department of Radiology, Massachusetts General Hospital (MGH), Harvard Medical School, Boston, Mass (B.C.B., K.D.); and MGH & BWH Center for Clinical Data Science, Boston, Mass (B.C.B., C.P.B., K.P.A., J. K. Chin, K.D., J. Kalpathy-Cramer)
| | - William W Mayo-Smith
- From the Department of Radiology, Brigham and Women's Hospital (BWH), Harvard Medical School, 75 Francis St, Boston, MA 02115 (C.R.W., D.I.G., K.P.A., B.D., W.W.M-S.); Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Mass (J.P., C.P.B., J. Kalpathy-Cramer); Health Sciences and Technology Department, Massachusetts Institute of Technology, Cambridge, Mass (J.P.); Department of Radiology, Massachusetts General Hospital (MGH), Harvard Medical School, Boston, Mass (B.C.B., K.D.); and MGH & BWH Center for Clinical Data Science, Boston, Mass (B.C.B., C.P.B., K.P.A., J. K. Chin, K.D., J. Kalpathy-Cramer)
| |
17
Fully automatic volume measurement of the adrenal gland on CT using deep learning to classify adrenal hyperplasia. Eur Radiol 2022; 33:4292-4302. [PMID: 36571602] [DOI: 10.1007/s00330-022-09347-5]
Abstract
OBJECTIVES To develop a fully automated deep learning model for adrenal segmentation and to evaluate its performance in classifying adrenal hyperplasia. METHODS This retrospective study evaluated automated adrenal segmentation in 308 abdominal CT scans from 48 patients with adrenal hyperplasia and 260 patients with normal glands from 2010 to 2021 (mean age, 42 years; 156 women). The dataset was split into training, validation, and test sets at a ratio of 6:2:2. Contrast-enhanced CT images and manually drawn adrenal gland masks were used to develop a U-Net-based segmentation model. Predicted adrenal volumes were obtained by fivefold splitting of the dataset without overlapping the test set. Adrenal volumes and anthropometric parameters (height, weight, and sex) were utilized to develop an algorithm to classify adrenal hyperplasia, using multilayer perceptron, support vector classification, a random forest classifier, and a decision tree classifier. To measure the performance of the developed model, the Dice coefficient and intraclass correlation coefficient (ICC) were used for segmentation, and area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were used for classification. RESULTS The model for segmenting adrenal glands achieved a Dice coefficient of 0.7009 for 308 cases and an ICC of 0.91 (95% CI, 0.90-0.93) for adrenal volume. The models for classifying hyperplasia had the following results: AUC, 0.98-0.99; accuracy, 0.948-0.961; sensitivity, 0.750-0.813; and specificity, 0.973-1.000. CONCLUSION The proposed segmentation algorithm can accurately segment the adrenal glands on CT scans and may help clinicians identify possible cases of adrenal hyperplasia. KEY POINTS
• A deep learning segmentation method can accurately segment the adrenal gland, which is a small organ, on CT scans.
• The machine learning algorithm to classify adrenal hyperplasia using adrenal volume and anthropometric parameters (height, weight, and sex) showed good performance.
• The proposed segmentation algorithm may help clinicians identify possible cases of adrenal hyperplasia.
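Many entries in this list report overlap accuracy as a Dice coefficient. As a reader's aid only (this code is not taken from any of the cited studies; the toy masks are illustrative), a minimal NumPy sketch of the metric on binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D example: two 16-voxel squares with a 9-voxel overlap
a = np.zeros((8, 8)); a[2:6, 2:6] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
print(dice_coefficient(a, b))  # 2*9/32 = 0.5625
```

The same function applies unchanged to 3D volumes, since NumPy boolean reductions are dimension-agnostic.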
18
Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. Front Med Technol 2022; 4:1076755. [PMID: 36590155] [PMCID: PMC9794840] [DOI: 10.3389/fmedt.2022.1076755]
Abstract
Artificial Intelligence (AI) plays an integral role in enhancing the quality of surgical simulation, which is increasingly becoming a popular tool for enriching the training experience of a surgeon. This spans the spectrum from facilitating preoperative planning to intraoperative visualisation and guidance, ultimately with the aim of improving patient safety. Although arguably still in its early stages of widespread clinical application, AI technology enables personal evaluation and provides personalised feedback in surgical training simulations. Several forms of surgical visualisation technologies currently in use for anatomical education and presurgical assessment rely on different AI algorithms. However, while it is promising to see clinical examples and technological reports attesting to the efficacy of AI-supported surgical simulators, barriers to widespread commercialisation of such devices and software remain complex and multifactorial. High implementation and production costs, scarcity of reports evidencing the superiority of such technology, and intrinsic technological limitations remain at the forefront. As AI technology is key to driving the future of surgical simulation, this paper will review the literature delineating its current state, challenges, and prospects. In addition, a consolidated list of FDA/CE-approved AI-powered medical devices for surgical simulation is presented, in order to shed light on the existing gap between academic achievements and the universal commercialisation of AI-enabled simulators. We call for further clinical assessment of AI-supported surgical simulators to support novel regulatory-body-approved devices and usher surgical education into a new era.
Affiliation(s)
- Jay J. Park: Department of General Surgery, Norfolk and Norwich University Hospital, Norwich, United Kingdom; Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Jakov Tiefenbach: Neurological Institute, Cleveland Clinic, Cleveland, OH, United States
- Andreas K. Demetriades: Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom; Department of Neurosurgery, Royal Infirmary of Edinburgh, Edinburgh, United Kingdom
19
Huang SY, Hsu WL, Hsu RJ, Liu DW. Fully Convolutional Network for the Semantic Segmentation of Medical Images: A Survey. Diagnostics (Basel) 2022; 12:2765. [PMID: 36428824] [PMCID: PMC9689961] [DOI: 10.3390/diagnostics12112765]
Abstract
There have been major developments in deep learning in computer vision since the 2010s. Deep learning has contributed to a wealth of data in medical image processing, and semantic segmentation is a salient technique in this field. This study retrospectively reviews recent studies on the application of deep learning for segmentation tasks in medical imaging and proposes potential directions for future development, including model development, data augmentation processing, and dataset creation. The strengths and deficiencies of studies on models and data augmentation, as well as their application to medical image segmentation, were analyzed. Fully convolutional network developments have led to the creation of the U-Net and its derivatives. Another noteworthy image segmentation model is DeepLab. Regarding data augmentation, due to the low data volume of medical images, most studies focus on means to increase the wealth of medical image data. Generative adversarial networks (GAN) increase data volume via deep learning. Despite the increasing types of medical image datasets, there is still a deficiency of datasets on specific problems, which should be improved moving forward. Considering the wealth of ongoing research on the application of deep learning processing to medical image segmentation, the data volume and practical clinical application problems must be addressed to ensure that the results are properly applied.
Affiliation(s)
- Sheng-Yao Huang: Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan; Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan
- Wen-Lin Hsu: Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan; Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan; School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Ren-Jun Hsu (corresponding author): Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan; Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan; School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Dai-Wei Liu (corresponding author): Institute of Medical Science, Tzu Chi University, Hualien 97071, Taiwan; Department of Radiation Oncology, Hualien Tzu Chi General Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan; Cancer Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 97071, Taiwan; School of Medicine, Tzu Chi University, Hualien 97071, Taiwan
- Correspondence: Tel. & Fax: +886-3-8561825 (R.-J.H. & D.-W.L.)
20
Hamabe A, Ishii M, Kamoda R, Sasuga S, Okuya K, Okita K, Akizuki E, Miura R, Korai T, Takemasa I. Artificial intelligence-based technology to make a three-dimensional pelvic model for preoperative simulation of rectal cancer surgery using MRI. Ann Gastroenterol Surg 2022; 6:788-794. [PMID: 36338585] [PMCID: PMC9628238] [DOI: 10.1002/ags3.12574]
Abstract
AIM A new technique that allows visualization of whole pelvic organs with high accuracy and usability is needed for preoperative simulation in advanced rectal cancer surgery. In this study, we developed an automated algorithm to create a three-dimensional (3D) model from pelvic MRI using artificial intelligence (AI) technology. METHODS This study included a total of 143 patients who underwent 3D MRI in a preoperative examination for rectal cancer. The training dataset included 133 patients, in which ground truth labels were created for pelvic vessels, nerves, and bone. A 3D variant of U-net was used for the network architecture. Ten patients who underwent lateral lymph node dissection were used as a validation dataset. The correctness of the vascular labelling was assessed for pelvic vessels and the Dice similarity coefficients calculated for pelvic bone. RESULTS An automatic segmentation algorithm that extracts the artery, vein, nerve, and pelvic bone was developed, automatically producing a 3D image of the entire pelvis. The total time needed for segmentation was 133 seconds. The success rate of the AI-based segmentation was 100% for the common and external iliac vessels, but the rates for the vesical vein (75%), superior gluteal vein (60%), or accessory obturator vein (63%) were suboptimal. Regarding pelvic bone, the average Dice similarity coefficient between manual and automatic segmentation was 0.97 (standard deviation 0.0043). CONCLUSION Though there is room to improve the segmentation accuracy, the algorithm developed in this study can be utilized for surgical simulation in the treatment of advanced rectal cancer.
Affiliation(s)
- Atsushi Hamabe, Masayuki Ishii, Koichi Okuya, Kenji Okita, Emi Akizuki, Ryo Miura, Takahiro Korai, and Ichiro Takemasa: Department of Surgery, Surgical Oncology and Science, Sapporo Medical University, Sapporo, Japan
21
Dillman JR, Somasundaram E, Brady SL, He L. Current and emerging artificial intelligence applications for pediatric abdominal imaging. Pediatr Radiol 2022; 52:2139-2148. [PMID: 33844048] [DOI: 10.1007/s00247-021-05057-0]
Abstract
Artificial intelligence (AI) uses computers to mimic cognitive functions of the human brain, allowing inferences to be made from generally large datasets. Traditional machine learning (e.g., decision tree analysis, support vector machines) and deep learning (e.g., convolutional neural networks) are two commonly employed AI approaches both outside and within the field of medicine. Such techniques can be used to evaluate medical images for the purposes of automated detection and segmentation, classification tasks (including diagnosis, lesion or tissue characterization, and prediction), and image reconstruction. In this review article we highlight recent literature describing current and emerging AI methods applied to abdominal imaging (e.g., CT, MRI and US) and suggest potential future applications of AI in the pediatric population.
Affiliation(s)
- Jonathan R Dillman, Elan Somasundaram, Samuel L Brady, and Lili He: Department of Radiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave., Cincinnati, OH, 45229, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
22
Watkins WT, Qing K, Han C, Hui S, Liu A. Auto-segmentation for total marrow irradiation. Front Oncol 2022; 12:970425. [PMID: 36110933] [PMCID: PMC9468379] [DOI: 10.3389/fonc.2022.970425]
Abstract
Purpose To evaluate the accuracy and efficiency of Artificial-Intelligence (AI) segmentation in Total Marrow Irradiation (TMI) including contours throughout the head and neck (H&N), thorax, abdomen, and pelvis. Methods An AI segmentation software was clinically introduced for total body contouring in TMI including 27 organs at risk (OARs) and 4 planning target volumes (PTVs). This work compares the clinically utilized contours to the AI-TMI contours for 21 patients. Structure and image DICOM data were used to generate comparisons including volumetric, spatial, and dosimetric variations between the AI- and human-edited contour sets. Conventional volume and surface measures including the Sørensen-Dice coefficient (Dice) and the 95th-percentile Hausdorff distance (HD95) were used, and novel efficiency metrics were introduced. The clinical efficiency gains were estimated by the percentage of the AI-contour surface within 1mm of the clinical contour surface: an unedited AI-contour has an efficiency gain of 100%, and an AI-contour with 70% of its surface <1mm from a clinical contour has an efficiency gain of 70%. The dosimetric deviations were estimated from the clinical dose distribution to compute the dose volume histogram (DVH) for all structures. Results A total of 467 contours were compared in the 21 patients. In PTVs, contour surfaces deviated by >1mm in 38.6% ± 23.1% of structures, an average efficiency gain of 61.4%. Deviations >5mm were detected in 12.0% ± 21.3% of the PTV contours. In OARs, deviations >1mm were detected in 24.4% ± 27.1% of the structure surfaces and >5mm in 7.2% ± 18.0%; an average clinical efficiency gain of 75.6%. In H&N OARs, efficiency gains ranged from 42% in the optic chiasm to 100% in the eyes (unedited in all cases). In the thorax, average efficiency gains were >80% in the spinal cord, heart, and both lungs. Efficiency gains ranged from 60-70% in the spleen, stomach, rectum, and bowel, and 75-84% in the liver, kidney, and bladder. DVH differences exceeded 0.05 at some dose level in 109/467 curves. The most common 5%-DVH variations were in the esophagus (86%), rectum (48%), and PTVs (22%). Conclusions AI auto-segmentation software offers a powerful solution for enhanced efficiency in TMI treatment planning. Whole-body segmentation including PTVs and normal organs was successful based on spatial and dosimetric comparison.
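The HD95 surface metric used in this study has a standard definition. As an illustration only (toy point sets, not the study's contours or code), a pure-NumPy sketch of a symmetric 95th-percentile Hausdorff distance between two point clouds:

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between (N, dim) point sets."""
    # Pairwise Euclidean distances via broadcasting: shape (N_a, N_b)
    d = np.sqrt(((points_a[:, None, :] - points_b[None, :, :]) ** 2).sum(axis=-1))
    a_to_b = d.min(axis=1)  # each point in A to its nearest neighbour in B
    b_to_a = d.min(axis=0)  # each point in B to its nearest neighbour in A
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))

# Toy contours: the same three points shifted by 1 mm in y
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
print(hd95(a, b))  # 1.0
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is why HD95 is usually preferred over the plain Hausdorff distance for contour comparison.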
Affiliation(s)
- William Tyler Watkins: Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
23
Gou S, Xu Y, Yang H, Tong N, Zhang X, Wei L, Zhao L, Zheng M, Liu W. Automated cervical tumor segmentation on MR images using multi-view feature attention network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103832]
24
Magallon-Baro A, Milder MTW, Granton PV, den Toom W, Nuyttens JJ, Hoogeman MS. Impact of Using Unedited CT-Based DIR-Propagated Autocontours on Online ART for Pancreatic SBRT. Front Oncol 2022; 12:910792. [PMID: 35756687] [PMCID: PMC9213731] [DOI: 10.3389/fonc.2022.910792]
Abstract
Purpose To determine the dosimetric impact of using unedited autocontours in daily plan adaptation of patients with locally advanced pancreatic cancer (LAPC) treated with stereotactic body radiotherapy using tumor tracking. Materials and Methods The study included 98 daily CT scans of 35 LAPC patients. All scans were manually contoured (MAN), and included the PTV and main organs-at-risk (OAR): stomach, duodenum and bowel. Precision and MIM deformable image registration (DIR) methods followed by contour propagation were used to generate autocontour sets on the daily CT scans. Autocontours remained unedited, and were compared to MAN on the whole organs and at 3, 1 and 0.5 cm from the PTV. Manual and autocontoured OAR were used to generate daily plans using the VOLO™ optimizer, and were compared to non-adapted plans. Resulting planned doses were compared based on PTV coverage and OAR dose-constraints. Results Overall, both algorithms reported a high agreement between unclipped MAN and autocontours, but showed worse results when being evaluated on the clipped structures at 1 cm and 0.5 cm from the PTV. Replanning with unedited autocontours resulted in better OAR sparing than non-adapted plans for 95% and 84% plans optimized using Precision and MIM autocontours, respectively, and obeyed OAR constraints in 64% and 56% of replans. Conclusion For the majority of fractions, manual correction of autocontours could be avoided or be limited to the region closest to the PTV. This practice could further reduce the overall timings of adaptive radiotherapy workflows for patients with LAPC.
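The dose comparisons above are typically read off cumulative dose-volume histograms. For orientation only (toy arrays, not the study's planning data or software), a minimal sketch of a cumulative DVH from a dose grid and a structure mask:

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Fraction of the structure volume receiving at least each dose level."""
    d = dose[mask.astype(bool)]  # dose values inside the structure
    return np.array([(d >= level).mean() for level in levels])

# Toy 2 x 2 dose grid (Gy) with a 3-voxel structure
dose = np.array([[10.0, 20.0], [30.0, 40.0]])
mask = np.array([[1, 1], [1, 0]])
levels = np.array([0.0, 15.0, 25.0])
print(cumulative_dvh(dose, mask, levels))  # fractions at 0, 15, 25 Gy
```

Comparing two plans then reduces to comparing such curves structure by structure, e.g. flagging any level where the fractions differ by more than 0.05 as in the TMI study above.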
Affiliation(s)
- Alba Magallon-Baro, Maaike T W Milder, Patrick V Granton, Wilhelm den Toom, Joost J Nuyttens, and Mischa S Hoogeman: Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, Netherlands
25
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157]
26
Recurrent neural network to predict hyperelastic constitutive behaviors of the skeletal muscle. Med Biol Eng Comput 2022; 60:1177-1185. [PMID: 35244859] [DOI: 10.1007/s11517-022-02541-z]
Abstract
Hyperelastic constitutive laws have been commonly used to model the passive behavior of the human skeletal muscle. Despite many efforts, the use of accurate finite element formulations of hyperelastic constitutive laws is still time-consuming for a real-time medical simulation system. The objective of the present study was to develop a deep learning model to predict the hyperelastic constitutive behaviors of the skeletal muscle toward a fast estimation of the muscle tissue stress. A finite element (FE) model of the right psoas muscle was developed. Neo-Hookean and Mooney-Rivlin laws were used. A tensile test was performed with an applied body force. A learning database was built from this model using an automatic probabilistic generation process. A long short-term memory (LSTM) neural network was implemented to predict the stress evolution of the skeletal muscle tissue. A hyperparameter tuning process was conducted. Root mean square error (RMSE) and the associated relative error were quantified to evaluate the precision of the predictive capacity of the developed deep learning model. Pearson correlation coefficients (R) were also computed. The nodal displacements and the maximal stresses range from 70 to 227 mm and from 2.79 to 5.61 MPa for the Neo-Hookean and Mooney-Rivlin laws, respectively. Regarding the LSTM predictions, the RMSE ranges from 224.3 ± 3.9 Pa (8%) to 227.5 ± 5.7 Pa (4%) for the Neo-Hookean and Mooney-Rivlin laws, respectively. Pearson correlation coefficients (R) of 0.78 ± 0.02 and 0.77 ± 0.02 were obtained for the Neo-Hookean and Mooney-Rivlin laws, respectively. The present study showed that, for the first time, the use of a deep learning model can reproduce the time-series behaviors of the complex FE formulations for skeletal muscle modeling. In particular, the use of an LSTM neural network leads to a fast and accurate surrogate model for the in silico prediction of the hyperelastic constitutive behaviors of the skeletal muscle. As perspectives, the developed deep learning model will be integrated into a real-time medical simulation of the skeletal muscle for prosthetic socket design and a childbirth simulator.
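For context, the two constitutive laws named in this abstract have standard invariant-based strain-energy forms (textbook definitions, not equations reproduced from the paper), with material constants $C_1$, $C_2$ and first and second invariants $I_1$, $I_2$ of the right Cauchy-Green deformation tensor:

```latex
% Strain-energy densities (incompressible form)
W_{\mathrm{NH}} = C_{1}\,(I_{1} - 3), \qquad
W_{\mathrm{MR}} = C_{1}\,(I_{1} - 3) + C_{2}\,(I_{2} - 3)

% Nominal stress in uniaxial tension at stretch \lambda
% (Neo-Hookean is the special case C_{2} = 0)
P = 2\left(C_{1} + \frac{C_{2}}{\lambda}\right)\left(\lambda - \lambda^{-2}\right)
```

These are the stress-stretch relations an FE solver evaluates at every element and time step, which is what makes a trained surrogate attractive for real-time use.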
27
Automated pancreas segmentation and volumetry using deep neural network on computed tomography. Sci Rep 2022; 12:4075. [PMID: 35260710] [PMCID: PMC8904764] [DOI: 10.1038/s41598-022-07848-3]
Abstract
Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on a large computed tomography dataset are scarce. Therefore, this study aims to perform deep-learning-based semantic segmentation on 1006 participants and evaluate the automatic segmentation performance of the pancreas via four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1006 patients and external validation using the Cancer Imaging Archive pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation via a relevant approach among the four deep learning networks. Using the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information of the pancreas for abdominal computed tomography.
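Once a segmentation mask is available, volumetry reduces to counting foreground voxels and scaling by the voxel size. A minimal sketch (illustrative mask and spacing, not this study's pipeline):

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume in millilitres of a binary mask, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))  # volume of one voxel in mm^3
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy mask: 10 x 10 x 10 voxels at 1 x 1 x 5 mm spacing -> 5000 mm^3 = 5 mL
mask = np.ones((10, 10, 10), dtype=np.uint8)
print(mask_volume_ml(mask, (1.0, 1.0, 5.0)))  # 5.0
```

In practice the spacing would be read from the CT header (e.g. DICOM PixelSpacing and slice thickness) rather than hard-coded.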
28
Seo SY, Kim SJ, Oh JS, Chung J, Kim SY, Oh SJ, Joo S, Kim JS. Unified Deep Learning-Based Mouse Brain MR Segmentation: Template-Based Individual Brain Positron Emission Tomography Volumes-of-Interest Generation Without Spatial Normalization in Mouse Alzheimer Model. Front Aging Neurosci 2022; 14:807903. [PMID: 35309883] [PMCID: PMC8931825] [DOI: 10.3389/fnagi.2022.807903]
Abstract
Although skull-stripping and brain region segmentation are essential for precise quantitative analysis of positron emission tomography (PET) of mouse brains, deep learning (DL)-based unified solutions, particularly for spatial normalization (SN), have posed a challenging problem in DL-based image processing. In this study, we propose an approach based on DL to resolve these issues. We generated both skull-stripping masks and individual brain-specific volumes-of-interest (VOIs: cortex, hippocampus, striatum, thalamus, and cerebellum) based on inverse spatial normalization (iSN) and deep convolutional neural network (deep CNN) models. We applied the proposed methods to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans twice, before and after the administration of human immunoglobulin or antibody-based treatments. For training the CNN, manually traced brain masks and iSN-based target VOIs were used as the labels. We compared our CNN-based VOIs with conventional (template-based) VOIs in terms of the correlation of standardized uptake value ratios (SUVRs) obtained by both methods and two-sample t-tests of SUVR % changes in target VOIs before and after treatment. Our deep CNN-based method successfully generated brain parenchyma masks and target VOIs, with no significant difference from conventional VOI methods in the SUVR correlation analysis, thus establishing a template-based VOI method that requires no SN.
Affiliation(s)
- Seung Yeon Seo
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| | - Soo-Jong Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Songpa-gu, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon-si, South Korea
| | - Jungsu S. Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- *Correspondence: Jungsu S. Oh
| | - Jinwha Chung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| | - Seog-Young Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| | - Seung Jun Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| | - Segyeong Joo
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| | - Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
| |
Collapse
|
29
|
Chen X, Chen Z, Li J, Zhang YD, Lin X, Qian X. Model-Driven Deep Learning Method for Pancreatic Cancer Segmentation Based on Spiral-Transformation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:75-87. [PMID: 34383646 DOI: 10.1109/tmi.2021.3104460] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Pancreatic cancer is a lethal malignant tumor with one of the worst prognoses. Accurate segmentation of pancreatic cancer is vital in clinical diagnosis and treatment. Due to the unclear boundary and small size of cancers, it is challenging both to manually annotate and to automatically segment cancers. Considering 3D information utilization and small sample sizes, we propose a model-driven deep learning method for pancreatic cancer segmentation based on spiral transformation. Specifically, a spiral-transformation algorithm with uniform sampling was developed to map 3D images onto 2D planes while preserving the spatial relationship between textures, thus addressing the challenge of effectively applying 3D contextual information in a 2D model. This study is the first to introduce the spiral transformation in a segmentation task to provide effective data augmentation, alleviating the issue of small sample size. Moreover, a transformation-weight-corrected module was embedded into the deep learning model to unify the entire framework. It couples 2D segmentation with a corresponding 3D rebuilding constraint to overcome the non-uniqueness of 3D rebuilding caused by the uniform and dense sampling. A smooth regularization based on rebuilding prior knowledge was also designed to optimize segmentation results. Extensive experiments showed that the proposed method achieved promising segmentation performance on multi-parametric MRI, where T2, T1, ADC, and DWI images obtained DSCs of 65.6%, 64.0%, 64.5%, and 65.3%, respectively. This method can provide a novel paradigm to efficiently apply 3D information and augment sample sizes in the development of artificial intelligence for cancer segmentation. Our source code will be released at https://github.com/SJTUBME-QianLab/Spiral-Segmentation.
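The core idea of the spiral transformation is to traverse a 3D volume along a spherical spiral so that 3D context is flattened into a lower-dimensional sequence while nearby samples stay spatially close. A heavily simplified toy sketch (nearest-neighbour sampling onto a 1D sequence; the paper's uniform-sampling scheme and 2D-plane mapping are more elaborate, and all parameter names here are illustrative):

```python
import numpy as np

def spiral_sample(volume, n_turns=8, n_points=400):
    """Sample a cubic 3D volume along a spherical spiral from the centre
    outward, returning a 1D sequence of nearest-neighbour voxel values."""
    d = volume.shape[0]
    c = (d - 1) / 2.0                     # geometric centre
    t = np.linspace(0, 1, n_points)
    theta = np.pi * t                     # polar angle sweeps 0..pi
    phi = 2 * np.pi * n_turns * t         # azimuth winds n_turns times
    r = c * t                             # radius grows to the boundary
    x = c + r * np.sin(theta) * np.cos(phi)
    y = c + r * np.sin(theta) * np.sin(phi)
    z = c + r * np.cos(theta)
    idx = np.clip(np.round([x, y, z]).astype(int), 0, d - 1)
    return volume[idx[0], idx[1], idx[2]]
```

The first sample always lands on the centre voxel, and denser winding (larger `n_turns`) preserves more angular detail at the cost of a longer sequence.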
Collapse
|
30
|
Zhu J, Bolsterlee B, Chow BVY, Cai C, Herbert RD, Song Y, Meijering E. Deep learning methods for automatic segmentation of lower leg muscles and bones from MRI scans of children with and without cerebral palsy. NMR IN BIOMEDICINE 2021; 34:e4609. [PMID: 34545647 DOI: 10.1002/nbm.4609] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 08/10/2021] [Accepted: 08/12/2021] [Indexed: 06/13/2023]
Abstract
Cerebral palsy is a neurological condition that is known to affect muscle growth. Detailed investigations of muscle growth require segmentation of muscles from MRI scans, which is typically done manually. In this study, we evaluated the performance of 2D, 3D, and hybrid deep learning models for automatic segmentation of 11 lower leg muscles and two bones from MRI scans of children with and without cerebral palsy. All six models were trained and evaluated on manually segmented T1-weighted MRI scans of the lower legs of 20 children, six of whom had cerebral palsy. The segmentation results were assessed using the median Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and volume error (VError) of all 13 labels of every scan. The best performance was achieved by H-DenseUNet, a hybrid model (DSC 0.90, ASSD 0.5 mm, and VError 2.6 cm3). The performance was equivalent to the inter-rater performance of manual segmentation (DSC 0.89, ASSD 0.6 mm, and VError 3.3 cm3). Models trained with the Dice loss function outperformed models trained with the cross-entropy loss function. Near-optimal performance could be attained using only 11 scans for training. Segmentation performance was similar for scans of typically developing children (DSC 0.90, ASSD 0.5 mm, and VError 2.8 cm3) and children with cerebral palsy (DSC 0.85, ASSD 0.6 mm, and VError 2.4 cm3). These findings demonstrate the feasibility of fully automatic segmentation of individual muscles and bones from MRI scans of children with and without cerebral palsy.
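Two of the three metrics reported here (DSC and VError) are simple functions of the binary label masks. A minimal sketch of both, assuming boolean or 0/1 voxel masks and a known voxel volume (function names are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)

def volume_error(a, b, voxel_volume_mm3):
    """Absolute volume difference between two binary masks, in cm^3."""
    return abs(int(a.sum()) - int(b.sum())) * voxel_volume_mm3 / 1000.0
```

For example, two slab masks covering 500 and 400 voxels with 400 voxels of overlap give a Dice of 800/900 ≈ 0.889, and at 1 mm³ per voxel a VError of 0.1 cm³.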
Collapse
Affiliation(s)
- Jiayi Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Neuroscience Research Australia (NeuRA), Sydney, Australia
| | - Bart Bolsterlee
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
| | - Brian V Y Chow
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- School of Medical Sciences, University of New South Wales, Sydney, Australia
| | - Chengxue Cai
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
| | - Robert D Herbert
- Neuroscience Research Australia (NeuRA), Sydney, Australia
- School of Medical Sciences, University of New South Wales, Sydney, Australia
| | - Yang Song
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
| | - Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
| |
Collapse
|
31
|
Herrmann P, Busana M, Cressoni M, Lotz J, Moerer O, Saager L, Meissner K, Quintel M, Gattinoni L. Using Artificial Intelligence for Automatic Segmentation of CT Lung Images in Acute Respiratory Distress Syndrome. Front Physiol 2021; 12:676118. [PMID: 34594233 PMCID: PMC8476971 DOI: 10.3389/fphys.2021.676118] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 08/17/2021] [Indexed: 01/17/2023] Open
Abstract
Knowledge of gas volume, tissue mass and recruitability measured by quantitative CT scan analysis (CT-qa) is important when setting the mechanical ventilation in acute respiratory distress syndrome (ARDS). Yet, the manual segmentation of the lung requires a considerable workload. Our goal was to provide an automatic, clinically applicable and reliable lung segmentation procedure. Therefore, a convolutional neural network (CNN) was used to train an artificial intelligence (AI) algorithm on 15 healthy subjects (1,302 slices), 100 ARDS patients (12,279 slices), and 20 COVID-19 patients (1,817 slices). Eighty percent of these populations was used for training and 20% for testing. The AI and manual segmentations at slice level were compared by intersection over union (IoU). The CT-qa variables were compared by regression and Bland-Altman analysis. The AI segmentation of a single patient required 5–10 s vs. 1–2 h for manual segmentation. At slice level, the algorithm showed on the test set an IoU across all CT slices of 91.3 ± 10.0, 85.2 ± 13.9, and 84.7 ± 14.0%, and across all lung volumes of 96.3 ± 0.6, 88.9 ± 3.1, and 86.3 ± 6.5% for normal lungs, ARDS and COVID-19, respectively, with a U-shape in the performance: better in the lung middle region, worse at the apex and base. At patient level, on the test set, the total lung volume measured by AI and manual segmentation had an R2 of 0.99 and a bias of −9.8 ml [CI: +56.0/−75.7 ml]. Recruitability, measured as the change in non-aerated tissue fraction, had a bias of +0.3% [CI: +6.2/−5.5%] between manual and AI segmentation, and a bias of −0.5% [CI: +2.3/−3.3%] when expressed as the change in well-aerated tissue fraction. The AI-powered lung segmentation provided fast and clinically reliable results. It is able to segment the lungs of seriously ill ARDS patients fully automatically.
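The two evaluation tools used here, per-slice IoU and Bland-Altman agreement, are both short computations. A minimal sketch (function names are illustrative; the 1.96 factor gives the conventional 95% limits of agreement under a normality assumption):

```python
import numpy as np

def iou(a, b):
    """Intersection over union (Jaccard index) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else float(np.logical_and(a, b).sum() / union)

def bland_altman(x, y):
    """Bias and 95% limits of agreement between paired measurements."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)
    return float(bias), (float(bias - half), float(bias + half))
```

Two 50-voxel slabs offset by one row overlap in 40 voxels and unite in 60, giving an IoU of 2/3; paired measurements differing by +1, −1, 0 give zero bias and limits of agreement of ±1.96.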
Collapse
Affiliation(s)
- Peter Herrmann
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Mattia Busana
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | | | - Joachim Lotz
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | - Onnen Moerer
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Leif Saager
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Konrad Meissner
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| | - Michael Quintel
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Department of Anesthesiology, DONAUISAR Klinikum Deggendorf, Deggendorf, Germany
| | - Luciano Gattinoni
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
| |
Collapse
|
32
|
Nath V, Yang D, Landman BA, Xu D, Roth HR. Diminishing Uncertainty Within the Training Pool: Active Learning for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2534-2547. [PMID: 33373298 DOI: 10.1109/tmi.2020.3048055] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Active learning is a unique abstraction of machine learning techniques in which the model/algorithm can guide users to annotate a set of data points that would be beneficial to the model, unlike passive machine learning. The primary advantage is that active learning frameworks select data points that can accelerate the learning process of a model and can reduce the amount of data needed to achieve full accuracy as compared to a model trained on a randomly acquired data set. Multiple frameworks for active learning combined with deep learning have been proposed, and the majority of them are dedicated to classification tasks. Herein, we explore active learning for the task of segmentation of medical imaging data sets. We investigate our proposed framework using two datasets: 1.) MRI scans of the hippocampus, 2.) CT scans of the pancreas and tumors. This work presents a query-by-committee approach for active learning where a joint optimizer is used for the committee. At the same time, we propose three new strategies for active learning: 1.) increasing the frequency of uncertain data to bias the training data set; 2.) using mutual information among the input images as a regularizer for acquisition to ensure diversity in the training dataset; 3.) adaptation of the Dice log-likelihood for Stein variational gradient descent (SVGD). The results indicate an improvement in terms of data reduction by achieving full accuracy while using only 22.69% and 48.85% of the available data for each dataset, respectively.
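The uncertainty-driven acquisition at the heart of such frameworks can be illustrated with predictive entropy: rank unlabeled cases by the mean per-voxel entropy of their softmax maps and annotate the most uncertain first. A minimal sketch, not the paper's committee-based criterion (which aggregates disagreement across committee members); names are illustrative:

```python
import numpy as np

def predictive_entropy(probs):
    """Mean per-voxel entropy of softmax probabilities, shape (..., C)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

def select_most_uncertain(prob_maps, k=1):
    """Indices of the k unlabeled cases with the highest mean entropy."""
    scores = [predictive_entropy(p) for p in prob_maps]
    return list(np.argsort(scores)[::-1][:k])
```

A near-uniform prediction ([0.5, 0.5], entropy ≈ 0.693 nats) outranks a confident one ([0.99, 0.01], entropy ≈ 0.056 nats), so the uncertain case is queried first.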
Collapse
|
33
|
Urago Y, Okamoto H, Kaneda T, Murakami N, Kashihara T, Takemori M, Nakayama H, Iijima K, Chiba T, Kuwahara J, Katsuta S, Nakamura S, Chang W, Saitoh H, Igaki H. Evaluation of auto-segmentation accuracy of cloud-based artificial intelligence and atlas-based models. Radiat Oncol 2021; 16:175. [PMID: 34503533 PMCID: PMC8427857 DOI: 10.1186/s13014-021-01896-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 08/26/2021] [Indexed: 01/13/2023] Open
Abstract
Background Contour delineation, a crucial process in radiation oncology, is time-consuming, and inaccuracy due to inter-observer variation has been a critical issue in this process. Atlas-based automatic segmentation was developed to improve delineation efficiency and reduce inter-observer variation. Additionally, automated segmentation using artificial intelligence (AI) has recently become available. In this study, auto-segmentations by atlas- and AI-based models for organs at risk (OARs) in patients with prostate and head and neck cancer were performed and delineation accuracies were evaluated. Methods Twenty-one patients with prostate cancer and 30 patients with head and neck cancer were evaluated. MIM Maestro was used to apply the atlas-based segmentation. MIM Contour ProtégéAI was used to apply the AI-based segmentation. Three similarity indices, the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean distance to agreement (MDA), were evaluated and compared with manual delineations. In addition, radiation oncologists visually evaluated the delineation accuracies. Results Among patients with prostate cancer, the AI-based model demonstrated higher accuracy than the atlas-based model in terms of DSC, HD, and MDA for the bladder and rectum. Upon visual evaluation, some errors were observed in the atlas-based delineations when the boundary between the small bowel or the seminal vesicle and the bladder was unclear. For patients with head and neck cancer, no significant differences were observed between the two models for almost all OARs, except for small structures such as the optic chiasm and optic nerve. The DSC tended to be lower when the HD and the MDA were smaller in small-volume delineations. Conclusions In terms of efficiency, the processing time for head and neck cancers was much shorter than manual delineation.
While quantitative evaluation showed that AI-based segmentation was significantly more accurate than atlas-based segmentation for prostate cancer, there was no significant difference for head and neck cancer. According to the visual evaluation, the lower need for manual correction with AI-based segmentation indicates that its efficiency is higher than that of the atlas-based model. The AI-based model can thus be expected to improve segmentation efficiency and to significantly shorten delineation time. Supplementary Information The online version contains supplementary material available at 10.1186/s13014-021-01896-1.
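The two distance-based indices used above, HD and MDA, are both derived from nearest-neighbour distances between the surface (contour) point sets of two delineations. A minimal pure-NumPy sketch for small point sets (function names are illustrative; production code would use KD-trees or distance transforms for speed):

```python
import numpy as np

def hausdorff(u, v):
    """Symmetric Hausdorff distance between point sets of shape (N, D):
    the worst-case nearest-neighbour distance in either direction."""
    d = np.sqrt(((u[:, None, :] - v[None, :, :]) ** 2).sum(-1))
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def mean_distance_to_agreement(u, v):
    """MDA: average of the two directed mean nearest-neighbour distances."""
    d = np.sqrt(((u[:, None, :] - v[None, :, :]) ** 2).sum(-1))
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2)
```

For contours {(0,0), (1,0)} and {(0,0), (0,3)}, the Hausdorff distance is 3 (driven by the outlier point (0,3)) while the MDA is only 1, which illustrates why the abstract reports both: HD captures the worst local error, MDA the typical one.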
Collapse
Affiliation(s)
- Yuka Urago
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Hiroyuki Okamoto
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan.
| | - Tomoya Kaneda
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Naoya Murakami
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Tairo Kashihara
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Mihiro Takemori
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Hiroki Nakayama
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Kotaro Iijima
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Takahito Chiba
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Junichi Kuwahara
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
- Department of Radiological Technology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Shouichi Katsuta
- Department of Radiological Technology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Satoshi Nakamura
- Department of Medical Physics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| | - Weishan Chang
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
| | - Hidetoshi Saitoh
- Department of Radiological Sciences, Graduate School of Human Health Sciences, Tokyo Metropolitan University, 7-2-10 Higashi-Ogu, Arakawa-ku, Tokyo, 116-8551, Japan
| | - Hiroshi Igaki
- Department of Radiation Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045, Japan
| |
Collapse
|
34
|
Abstract
Artificial intelligence is poised to revolutionize medical imaging. It takes advantage of the high-dimensional quantitative features present in medical images that may not be fully appreciated by humans. Artificial intelligence has the potential to facilitate automatic organ segmentation, disease detection and characterization, and prediction of disease recurrence. This article reviews the current status of artificial intelligence in liver imaging and the opportunities and challenges in clinical implementation.
Collapse
|
35
|
Lee JG, Kim H, Kang H, Koo HJ, Kang JW, Kim YH, Yang DH. Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts. Korean J Radiol 2021; 22:1764-1776. [PMID: 34402248 PMCID: PMC8546141 DOI: 10.3348/kjr.2021.0148] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 04/26/2021] [Accepted: 05/13/2021] [Indexed: 11/26/2022] Open
Abstract
Objective This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1–10, 11–100, 101–400, > 400) was evaluated. Results In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. The CAC_auto system, in measuring the Agatston score, yielded ICCs of 0.99 for all the vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). 
Conclusion The deep learning-based CAC_auto system provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
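The risk stratification and agreement analysis in this study can be illustrated compactly: map Agatston scores into the five categories used above (0, 1–10, 11–100, 101–400, >400), then compare two raters with a linearly weighted kappa. A minimal sketch with a hand-rolled kappa (function names are illustrative; libraries such as scikit-learn offer an equivalent via `cohen_kappa_score(..., weights='linear')`):

```python
import numpy as np

def agatston_category(score):
    """Map an Agatston score to the study's risk category index 0..4
    for bins 0, 1-10, 11-100, 101-400, >400."""
    return int(np.searchsorted([0, 10, 100, 400], score))

def linear_weighted_kappa(a, b, n_cat=5):
    """Linearly weighted Cohen's kappa between two category sequences."""
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1                      # observed joint frequencies
    obs /= obs.sum()
    exp = np.outer(obs.sum(1), obs.sum(0))  # chance-agreement expectation
    w = np.abs(np.arange(n_cat)[:, None] - np.arange(n_cat)[None, :]) / (n_cat - 1)
    return float(1 - (w * obs).sum() / (w * exp).sum())
```

Perfect categorical agreement yields kappa = 1.0; the study's reported value of 0.94 indicates near-perfect agreement with small, mostly adjacent-category disagreements.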
Collapse
Affiliation(s)
- June Goo Lee
- Biomedical Engineering Research Center, Asan Institute for Life Sciences, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - HeeSoo Kim
- Department of Radiology and Research Institute of Radiology, Cardiac Imaging Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Heejun Kang
- Divison of Cardiology, Department of Internal Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Hyun Jung Koo
- Department of Radiology and Research Institute of Radiology, Cardiac Imaging Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Joon Won Kang
- Department of Radiology and Research Institute of Radiology, Cardiac Imaging Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Young Hak Kim
- Divison of Cardiology, Department of Internal Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
| | - Dong Hyun Yang
- Department of Radiology and Research Institute of Radiology, Cardiac Imaging Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
| |
Collapse
|
36
|
Yang Y, Li YX, Yao RQ, Du XH, Ren C. Artificial intelligence in small intestinal diseases: Application and prospects. World J Gastroenterol 2021; 27:3734-3747. [PMID: 34321840 PMCID: PMC8291013 DOI: 10.3748/wjg.v27.i25.3734] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 04/09/2021] [Accepted: 05/08/2021] [Indexed: 02/06/2023] Open
Abstract
The small intestine is located in the middle of the gastrointestinal tract, so small intestinal diseases are more difficult to diagnose than other gastrointestinal diseases. However, artificial intelligence, with its efficient learning capacity and computational power, is now extensively applied to small intestinal diseases and plays an important role in auxiliary diagnosis and prognosis prediction based on capsule endoscopy and other examination methods, improving the accuracy of diagnosis and prediction and reducing the workload of doctors. In this review, a comprehensive retrieval was performed on articles published up to October 2020 from PubMed and other databases. The application status of artificial intelligence in small intestinal diseases is thereby systematically introduced, and the challenges and prospects in this field are also analyzed.
Collapse
Affiliation(s)
- Yu Yang
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Yu-Xuan Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Ren-Qi Yao
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People‘s Liberation Army General Hospital, Beijing 100048, China
- Department of Burn Surgery, Changhai Hospital, Naval Medical University, Shanghai 200433, China
| | - Xiao-Hui Du
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Chao Ren
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People‘s Liberation Army General Hospital, Beijing 100048, China
| |
Collapse
|
37
|
Kart T, Fischer M, Küstner T, Hepp T, Bamberg F, Winzeck S, Glocker B, Rueckert D, Gatidis S. Deep Learning-Based Automated Abdominal Organ Segmentation in the UK Biobank and German National Cohort Magnetic Resonance Imaging Studies. Invest Radiol 2021; 56:401-408. [PMID: 33930003 DOI: 10.1097/rli.0000000000000755] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
PURPOSE The aims of this study were to train and evaluate deep learning models for automated segmentation of abdominal organs in whole-body magnetic resonance (MR) images from the UK Biobank (UKBB) and German National Cohort (GNC) MR imaging studies and to make these models available to the scientific community for analysis of these data sets. METHODS A total of 200 T1-weighted MR image data sets of healthy volunteers each from UKBB and GNC (400 data sets in total) were available in this study. Liver, spleen, left and right kidney, and pancreas were segmented manually on all 400 data sets, providing labeled ground truth data for training of a previously described U-Net-based deep learning framework for automated medical image segmentation (nnU-Net). The trained models were tested on all data sets using a 4-fold cross-validation scheme. Qualitative analysis of automated segmentation results was performed visually; performance metrics between automated and manual segmentation results were computed for quantitative analysis. In addition, interobserver segmentation variability between 2 human readers was assessed on a subset of the data. RESULTS Automated abdominal organ segmentation was performed with high qualitative and quantitative accuracy on UKBB and GNC data. In more than 90% of data sets, no or only minor visually detectable qualitative segmentation errors occurred. Mean Dice scores of automated segmentations compared with manual reference segmentations were well above 0.9 for the liver, spleen, and kidneys on UKBB and GNC data and around 0.82 and 0.89 for the pancreas on UKBB and GNC data, respectively. Mean average symmetric surface distance was between 0.3 and 1.5 mm for the liver, spleen, and kidneys and between 2 and 2.2 mm for pancreas segmentation. The quantitative accuracy of automated segmentation was comparable with the agreement between 2 human readers for all organs on UKBB and GNC data. 
CONCLUSION Automated segmentation of abdominal organs is possible with high qualitative and quantitative accuracy on whole-body MR imaging data acquired as part of UKBB and GNC. The results obtained and deep learning models trained in this study can be used as a foundation for automated analysis of thousands of MR data sets of UKBB and GNC and thus contribute to tackling topical and original scientific questions.
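The 4-fold cross-validation scheme used to test these models partitions the cases so that every data set appears in exactly one test fold. A minimal stdlib sketch of the index bookkeeping (names are illustrative; nnU-Net manages its own fold splits internally):

```python
import random

def kfold_splits(n, k=4, seed=0):
    """Return k (train_idx, test_idx) pairs over n cases; each case
    lands in exactly one test fold, the rest form its training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]     # round-robin fold assignment
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]
```

With 400 cases and k=4, each model is trained on 300 cases and evaluated on the held-out 100, and pooling the four test folds yields predictions for all 400 data sets.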
Collapse
Affiliation(s)
- Turkay Kart
- From the Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, UK
| | - Marc Fischer
- Medical Image and Data Analysis Lab, Department of Radiology, University Hospital Tübingen, Tübingen, Germany
| | - Thomas Küstner
- Medical Image and Data Analysis Lab, Department of Radiology, University Hospital Tübingen, Tübingen, Germany
| | | | - Fabian Bamberg
- Department of Radiology, University Hospital Freiburg, Freiburg, Germany
| | - Stefan Winzeck
- From the Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, UK
| | - Ben Glocker
- From the Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, UK
| | | | | |
Collapse
|
38
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 59] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development for deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories which are 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to their network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potentials of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
39
|
Kim DW, Lee G, Kim SY, Ahn G, Lee JG, Lee SS, Kim KW, Park SH, Lee YJ, Kim N. Deep learning-based algorithm to detect primary hepatic malignancy in multiphase CT of patients at high risk for HCC. Eur Radiol 2021; 31:7047-7057. [PMID: 33738600 DOI: 10.1007/s00330-021-07803-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 01/28/2021] [Accepted: 02/17/2021] [Indexed: 02/06/2023]
Abstract
OBJECTIVES To develop and evaluate a deep learning-based model capable of detecting primary hepatic malignancies in multiphase CT images of patients at high risk for hepatocellular carcinoma (HCC). METHODS A total of 1350 multiphase CT scans of 1280 hepatic malignancies (1202 HCCs and 78 non-HCCs) in 1320 patients at high risk for HCC were retrospectively analyzed. Following the delineation of the focal hepatic lesions according to reference standards, the CT scans were categorized randomly into the training (568 scans), tuning (193 scans), and test (589 scans) sets. Multiphase CT information was subjected to multichannel integration, and livers were automatically segmented before model development. A deep learning-based model capable of detecting malignancies was developed using a mask region-based convolutional neural network. The thresholds of the prediction score and the intersection over union were determined on the tuning set corresponding to the highest sensitivity with < 5 false-positive cases per CT scan. The sensitivity and the number of false-positives of the proposed model on the test set were calculated. Potential causes of false-negatives and false-positives on the test set were analyzed. RESULTS This model exhibited a sensitivity of 84.8% with 4.80 false-positives per CT scan on the test set. The most frequent potential causes of false-negatives and false-positives were determined to be atypical enhancement patterns for HCC (71.7%) and registration/segmentation errors (42.7%), respectively. CONCLUSIONS The proposed deep learning-based model developed to automatically detect primary hepatic malignancies exhibited a sensitivity of 84.8% with 4.80 false-positives per CT scan on the test set. KEY POINTS • Image processing, including multichannel integration of multiphase CT and automatic liver segmentation, enabled the application of a deep learning-based model to detect primary hepatic malignancy. 
• Our model exhibited a sensitivity of 84.8% with a false-positive rate of 4.80 per CT scan.
Affiliation(s)
- Dong Wook Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Gaeun Lee: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- So Yeon Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Geunhwi Ahn: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- June-Goo Lee: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Biomedical Engineering Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Republic of Korea
- Seung Soo Lee: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Kyung Won Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Seong Ho Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Yoon Jin Lee: Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea
- Namkug Kim: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Biomedical Engineering Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Republic of Korea
40
Kavur AE, Gezer NS, Barış M, Aslan S, Conze PH, Groza V, Pham DD, Chatterjee S, Ernst P, Özkan S, Baydar B, Lachinov D, Han S, Pauli J, Isensee F, Perkonigg M, Sathish R, Rajan R, Sheet D, Dovletov G, Speck O, Nürnberger A, Maier-Hein KH, Bozdağı Akar G, Ünal G, Dicle O, Selver MA. CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation. Med Image Anal 2020; 69:101950. [PMID: 33421920] [DOI: 10.1016/j.media.2020.101950]
Abstract
Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) have introduced new state-of-the-art segmentation systems. Although these systems outperform existing ones in overall accuracy, the effects of DL model properties and parameters on performance are hard to interpret. This makes comparative analysis a necessary tool towards interpretable studies and systems. Moreover, the performance of DL for emerging learning approaches such as cross-modality and multi-modal semantic segmentation has rarely been discussed. To expand knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2019 in Venice, Italy. Abdominal organ segmentation from routine acquisitions plays an important role in several clinical applications, such as pre-surgical planning or morphological and volumetric follow-up of various diseases. These applications require a certain level of performance on a diverse set of metrics, such as maximum symmetric surface distance (MSSD) to determine surgical error margins, or overlap errors for tracking size and shape differences. Previous abdomen-related challenges focused mainly on tumor/lesion detection and/or classification with a single modality. Conversely, CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of the participating approaches from multiple perspectives. The results were investigated thoroughly and compared with manual annotations and interactive methods.
The analysis shows that DL models can achieve reliable volumetric performance for single-modality (CT / MR) segmentation (DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of the participating models decreases dramatically for the cross-modality liver segmentation task (DICE: 0.88 ± 0.15; MSSD: 36.33 ± 21.97 mm). Despite contrary examples in other applications, multi-tasking DL models designed to segment all organs were observed to perform worse than organ-specific ones (a performance drop of around 5%); nevertheless, some of the successful models showed better performance with their multi-organ versions. We conclude that exploring these pros and cons of single- versus multi-organ and cross-modality segmentation is poised to influence further research on effective algorithms that would support real-world clinical applications. Finally, with more than 1500 participants and more than 550 submissions, another important contribution of this study is its analysis of shortcomings of challenge organization, such as the effects of multiple submissions and the peeking phenomenon.
Affiliation(s)
- A Emre Kavur: Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Izmir, Turkey
- N Sinem Gezer: Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- Mustafa Barış: Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- Sinem Aslan: Ca' Foscari University of Venice, ECLT and DAIS, Venice, Italy; Ege University, International Computer Institute, Izmir, Turkey
- Duc Duy Pham: Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Soumick Chatterjee: Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany; Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
- Philipp Ernst: Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
- Savaş Özkan: Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Bora Baydar: Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Dmitry Lachinov: Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Shuo Han: Johns Hopkins University, Baltimore, USA
- Josef Pauli: Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Fabian Isensee: Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Matthias Perkonigg: CIR Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
- Rachana Sathish: Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
- Ronnie Rajan: School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, India
- Debdoot Sheet: Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
- Gurbandurdy Dovletov: Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Oliver Speck: Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
- Andreas Nürnberger: Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
- Klaus H Maier-Hein: Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Gözde Bozdağı Akar: Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Gözde Ünal: Faculty of Computer and Informatics Engineering, İstanbul Technical University, İstanbul, Turkey
- Oğuz Dicle: Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- M Alper Selver: Department of Electrical and Electronics Engineering, Dokuz Eylul University, Izmir, Turkey
41
Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020; 21:272-279. [PMID: 33238060] [PMCID: PMC7769393] [DOI: 10.1002/acm2.13097]
Abstract
Objective To compare the accuracy of a deep learning-based auto-segmentation model with that of manual contouring by a medical resident, where both tried to mimic the delineation "habits" of the same senior clinical physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, the medical resident, who had been instructed by the senior physician for approximately 8 months, delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate the delineation accuracy for the CTV, bladder, rectum, small intestine, femoral-head-left, and femoral-head-right. Results The DSC values of the auto-segmentation model and the resident's manual contouring were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the femoral-head-right (P < 0.05), 0.88 and 0.84 for the femoral-head-left (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the femoral-head-right (P > 0.05), 6.17 and 6.31 for the femoral-head-left (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs, whereas the resident took approximately 90 min to complete the same task. Conclusion In this study, the auto-segmentation model was as accurate as the medical resident but far more efficient. Furthermore, the auto-segmentation approach offers the additional advantages of being consistent and ever improving compared with manual approaches.
Affiliation(s)
- Zhi Wang: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yankui Chang: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Zhao Peng: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Yin Lv: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weijiong Shi: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Fan Wang: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Pei: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
- X George Xu: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
42
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741] [DOI: 10.1016/j.ymeth.2020.10.004]
Abstract
State-of-the-art patient management frequently mandates investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT, and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise for highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in medical imaging has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing, and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat: Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut: College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria