1. Erozan A, Lösel PD, Heuveline V, Weinhardt V. Automated 3D cytoplasm segmentation in soft X-ray tomography. iScience 2024; 27:109856. [PMID: 38784019; PMCID: PMC11112332; DOI: 10.1016/j.isci.2024.109856]
Abstract
Cellular structure is key to understanding cellular function, diagnostics, and therapy development. Soft X-ray tomography (SXT) is a unique tool for imaging cellular structure without fixation or labeling at high spatial resolution and throughput. Fast acquisition times increase the demand for accelerated image analysis, such as segmentation. Currently, cellular structures are segmented manually, which is a major bottleneck in SXT data analysis. This paper introduces ACSeg, an automated 3D cytoplasm segmentation model. ACSeg is generated using semi-automated labels and a 3D U-Net and is trained on 43 SXT tomograms of immune T cells, converging rapidly to high-accuracy segmentation and thereby reducing time and labor. Furthermore, adding only 6 SXT tomograms of other cell types diversifies the model, showing potential for optimal experimental design. ACSeg successfully segmented unseen tomograms and is published on Biomedisa, enabling high-throughput analysis of cell volume and cytoplasm structure in diverse cell types.
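A minimal sketch of the kind of 3D U-Net the paper builds on, in PyTorch; the channel widths, network depth, and 64³ input patch size are illustrative assumptions, not the published ACSeg configuration.

```python
# Minimal 3D U-Net sketch for binary cytoplasm segmentation (assumed sizes).
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.BatchNorm3d(cout), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.BatchNorm3d(cout), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.bottleneck = block(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv3d(16, 1, 1)  # logits for a binary cytoplasm mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

x = torch.randn(1, 1, 64, 64, 64)  # one single-channel 64^3 tomogram patch
print(UNet3D()(x).shape)           # torch.Size([1, 1, 64, 64, 64])
```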
Affiliation(s)
- Ayse Erozan
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Philipp D. Lösel
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Department of Materials Physics Research School of Physics, The Australian National University, Acton ACT, Australia
- Vincent Heuveline
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Venera Weinhardt
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
2. Ono T, Adachi T, Hirashima H, Iramina H, Kishi N, Matsuo Y, Nakamura M, Mizowaki T. Unifying gamma passing rates in patient-specific QA for VMAT lung cancer treatment based on data assimilation. Phys Eng Sci Med 2024. [PMID: 38900228; DOI: 10.1007/s13246-024-01448-3]
Abstract
This study aimed to identify systematic errors in measurement-, calculation-, and prediction-based patient-specific quality assurance (PSQA) methods for volumetric modulated arc therapy (VMAT) for lung cancer, and to standardize the gamma passing rate (GPR) by accounting for these systematic errors during data assimilation. The study included 150 patients with lung cancer who underwent VMAT. VMAT plans were generated using a collapsed-cone algorithm. For measurement-based PSQA, ArcCHECK was employed. For calculation-based PSQA, Acuros XB was used to recalculate the plans. For prediction-based PSQA, the GPR was forecast using a previously developed GPR prediction model. A representative GPR value was estimated for each original plan from the three PSQA methods using the least-squares method. The unified GPR was computed by adjusting the original GPR to account for systematic errors. The ranges of the limits of agreement (LoA) for the original and unified GPRs were assessed against the representative GPR using Bland-Altman plots. For GPR (3%/2 mm), the original GPRs were 94.4 ± 3.5%, 98.6 ± 2.2%, and 93.3 ± 3.4% for the measurement-, calculation-, and prediction-based PSQA methods, respectively, and the representative GPR was 95.5 ± 2.0%. The unified GPRs were 95.3 ± 2.8%, 95.4 ± 3.5%, and 95.4 ± 3.1%, respectively. The range of the LoA decreased from 12.8% for the original GPRs to 9.5% for the unified GPRs across all three PSQA methods. The study evaluated unified GPRs that corrected for systematic errors; proposing unified criteria for PSQA can enhance safety regardless of the method used.
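A hedged sketch of the data-assimilation idea: with equal weights, the least-squares representative GPR per plan reduces to the mean of the three methods' values, and each method's systematic error is its mean offset from that representative. The exact estimator and weighting in the paper may differ; the numbers below are synthetic.

```python
# Illustrative reading of the unification step, not the paper's exact estimator.
import numpy as np

rng = np.random.default_rng(0)
true_gpr = rng.normal(95.5, 2.0, size=150)             # latent plan quality (%)
offsets = np.array([-1.1, 3.1, -2.2])                  # per-method systematic error (assumed)
gpr = true_gpr[:, None] + offsets + rng.normal(0, 1.0, size=(150, 3))

# Least-squares representative value per plan: with equal weights, the row mean.
representative = gpr.mean(axis=1)

# Systematic error of each method = mean deviation from the representative GPR.
systematic = (gpr - representative[:, None]).mean(axis=0)
unified = gpr - systematic                             # corrected per-method GPRs

print("systematic offsets:", systematic.round(2))
print("unified means:", unified.mean(axis=0).round(2))
```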
Affiliation(s)
- Tomohiro Ono
- Department of Radiation Oncology, Shiga General Hospital, 5-4-30 Moriyama, Moriyama-shi, Shiga, 524-8524, Japan.
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan.
- Takanori Adachi
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Hideaki Hirashima
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Hiraku Iramina
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Noriko Kishi
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Yukinori Matsuo
- Department of Radiation Oncology, Kindai University Faculty of Medicine, Osaka, Japan
- Mitsuhiro Nakamura
- Department of Advanced Medical Physics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Takashi Mizowaki
- Department of Radiation Oncology and Image-applied Therapy, Kyoto University Graduate School of Medicine, Kyoto, Japan
3. Aoyama T, Shimizu H, Koide Y, Kamezawa H, Fukunaga JI, Kitagawa T, Tachibana H, Suzuki K, Kodaira T. Deep learning-based lung dose prediction using chest X-ray images in non-small cell lung cancer radiotherapy. J Med Phys 2024; 49:33-40. [PMID: 38828071; PMCID: PMC11141742; DOI: 10.4103/jmp.jmp_122_23]
Abstract
Purpose This study aimed to develop a deep learning model that predicts V20 (the volume of lung parenchyma receiving ≥20 Gy) during intensity-modulated radiation therapy from chest X-ray images. Methods The study used 91 chest X-ray images of patients with lung cancer acquired routinely during the admission workup. The prescription dose for the planning target volume was 60 Gy in 30 fractions. A convolutional neural network-based regression model was developed to predict V20. Model performance was evaluated with the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) under four-fold cross-validation. Eligible patients were treated between 2018 and 2022, with a V20 of 19.3% (range, 4.9%-30.7%). Results The developed model predicted V20 with an R2 of 0.16, an RMSE of 5.4%, and an MAE of 4.5%. The median error was -1.8% (range, -13.0% to 9.2%). The Pearson correlation coefficient between the calculated and predicted V20 values was 0.40. As a binary classifier for V20 <20%, the model showed a sensitivity of 75.0%, specificity of 82.6%, diagnostic accuracy of 80.6%, and area under the receiver operating characteristic curve of 0.79. Conclusions The proposed deep learning chest X-ray model can predict V20 and may play an important role in the early determination of patient treatment strategies.
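A sketch of the two ingredients described above: a small CNN regression head producing a scalar V20, and the three reported regression metrics. The architecture and input size are assumptions, not the paper's network.

```python
# Toy CNN regression head for a scalar V20 prediction, plus R2 / RMSE / MAE.
import torch
import torch.nn as nn
import numpy as np

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                      # scalar V20 (%) prediction
)
print(model(torch.randn(4, 1, 256, 256)).shape)  # torch.Size([4, 1])

def regression_metrics(y_true, y_pred):
    err = y_pred - y_true
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"R2": 1 - ss_res / ss_tot,
            "RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err)))}

print(regression_metrics(np.array([19.0, 25.0, 12.0]), np.array([20.1, 23.5, 14.0])))
```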
Affiliation(s)
- Takahiro Aoyama
- Department of Radiation Oncology, Aichi Cancer Center, Nagoya, Japan
- Hidetoshi Shimizu
- Department of Radiation Oncology, Aichi Cancer Center, Nagoya, Japan
- Yutaro Koide
- Department of Radiation Oncology, Aichi Cancer Center, Nagoya, Japan
- Hidemi Kamezawa
- Division of Radiological Sciences, Graduate School of Health Sciences, Teikyo University, Fukuoka, Japan
- Jun-Ichi Fukunaga
- Division of Radiology, Department of Medical Technology, Kyushu University Hospital, Fukuoka, Japan
- Tomoki Kitagawa
- Department of Radiation Oncology, Aichi Cancer Center, Nagoya, Japan
- Kojiro Suzuki
- Department of Radiology, Aichi Medical University, Nagakute, Aichi, Japan
- Takeshi Kodaira
- Department of Radiation Oncology, Aichi Cancer Center, Nagoya, Japan
4. Arian A, Mehrabi Nejad MM, Zoorpaikar M, Hasanzadeh N, Sotoudeh-Paima S, Kolahi S, Gity M, Soltanian-Zadeh H. Accuracy of artificial intelligence CT quantification in predicting COVID-19 subjects' prognosis. PLoS One 2023; 18:e0294899. [PMID: 38064442; PMCID: PMC10707659; DOI: 10.1371/journal.pone.0294899]
Abstract
BACKGROUND Artificial intelligence (AI)-aided analysis of chest CT expedites the quantification of abnormalities and may facilitate the diagnosis and prognostic assessment of subjects with COVID-19. OBJECTIVES This study investigates the performance of an AI-aided quantification model in predicting the clinical outcomes of hospitalized subjects with COVID-19 and compares it with radiologists' performance. SUBJECTS AND METHODS A total of 90 subjects with COVID-19 (men, n = 59 [65.6%]; age, 52.9 ± 16.7 years) were recruited in this cross-sectional study. Quantification of the total and compromised lung parenchyma was performed by two expert radiologists using volumetric image analysis software and compared against an AI-assisted package consisting of a modified U-Net model for segmenting COVID-19 lesions and an off-the-shelf U-Net model, augmented with COVID-19 data, for segmenting lung volume. The fraction of compromised lung parenchyma (%CL) was calculated. Based on clinical outcomes, the subjects were divided into critical (n = 45) and noncritical (n = 45) groups, and all admission data were compared between them. RESULTS There was excellent agreement between the radiologist-obtained and AI-assisted measurements (intraclass correlation coefficient = 0.88, P < 0.001). Both the AI-assisted and radiologist-obtained %CLs were significantly higher in the critical subjects than in the noncritical subjects (P = 0.009 and 0.02, respectively). In multivariate logistic regression to distinguish the critical subjects, an AI-assisted %CL ≥35% (odds ratio [OR] = 17.0), oxygen saturation <88% (OR = 33.6), immunocompromised condition (OR = 8.1), and other comorbidities (OR = 15.2) remained independently significant. The proposed model predicted critical outcomes with an accuracy of 83.9%, a sensitivity of 79.1%, and a specificity of 88.6%. CONCLUSIONS AI-assisted measurements are similar to quantitative radiologist-obtained measurements in determining lung involvement in COVID-19 subjects.
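As a sketch of the core quantity, %CL can be computed from binary lung and lesion masks as below, together with the ≥35% rule used in the regression as a critical-outcome flag. The mask names and shapes are illustrative.

```python
# %CL = compromised voxels inside the lung / total lung voxels * 100.
import numpy as np

def percent_cl(lung_mask, lesion_mask):
    lung = lung_mask.astype(bool)
    lesion = lesion_mask.astype(bool) & lung   # count lesions only inside the lung
    return 100.0 * lesion.sum() / lung.sum()

lung = np.zeros((64, 64, 64), bool); lung[16:48, 16:48, 16:48] = True
lesion = np.zeros_like(lung);        lesion[16:48, 16:32, 16:48] = True
cl = percent_cl(lung, lesion)
print(f"%CL = {cl:.1f}, critical flag (>=35%): {cl >= 35.0}")
```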
Affiliation(s)
- Arvin Arian
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad-Mehdi Mehrabi Nejad
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mostafa Zoorpaikar
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Navid Hasanzadeh
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Saman Sotoudeh-Paima
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Shahriar Kolahi
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Masoumeh Gity
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Soltanian-Zadeh
- Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
5. Song Y, Yang H, Ge Z, Du H, Li G. Age estimation based on 3D pulp segmentation of first molars from CBCT images using U-Net. Dentomaxillofac Radiol 2023; 52:20230177. [PMID: 37427595; PMCID: PMC10552131; DOI: 10.1259/dmfr.20230177]
Abstract
OBJECTIVE To train a U-Net model to segment the intact pulp cavity of first molars and to establish a reliable mathematical model for age estimation. METHODS We trained a U-Net model on 20 sets of cone-beam CT images to segment the intact pulp cavity of first molars. Using this model, 239 maxillary and 234 mandibular first molars from 142 males and 135 females aged 15-69 years were segmented, and the intact pulp cavity volumes were calculated. Logarithmic regression analysis was then used to establish a mathematical model with age as the dependent variable and pulp cavity volume as the independent variable. Another 256 first molars were collected to estimate ages with the established model. The mean absolute error and root mean square error between the actual and estimated ages were used to assess the precision and accuracy of the model. RESULTS The Dice similarity coefficient of the U-Net model was 95.6%. The established age estimation model was [Formula: see text] (V is the intact pulp cavity volume of the first molars). The coefficient of determination (R2), mean absolute error, and root mean square error were 0.662, 6.72 years, and 8.26 years, respectively. CONCLUSION The trained U-Net model can accurately segment the pulp cavity of first molars from three-dimensional cone-beam CT images, and the segmented pulp cavity volumes can be used to estimate human age with reasonable precision and accuracy.
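The published formula itself is elided above ("[Formula: see text]"), so as a hedged sketch we fit a logarithmic model of the stated form, age = a + b·ln(V), on synthetic data; the coefficients, units, and noise level below are assumptions.

```python
# Fit age = a + b*ln(V) by least squares and report MAE / RMSE (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
volume = rng.uniform(20, 120, size=200)          # pulp cavity volume (assumed mm^3)
age = 120 - 22 * np.log(volume) + rng.normal(0, 6, 200)

b, a = np.polyfit(np.log(volume), age, 1)        # slope b, intercept a in log space
pred = a + b * np.log(volume)
mae = np.mean(np.abs(pred - age))
rmse = np.sqrt(np.mean((pred - age) ** 2))
print(f"age ~ {a:.1f} + {b:.1f}*ln(V); MAE={mae:.2f} y, RMSE={rmse:.2f} y")
```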
Affiliation(s)
- Yangjing Song
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology; National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China
- Huifang Yang
- Center of Digital Dentistry, Peking University School and Hospital of Stomatology & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Zhipu Ge
- Department of Radiology, Qingdao Stomatological Hospital Affiliated to Qingdao University, Qingdao, Shandong Province, China
- Han Du
- Shanghai Stomatological Hospital & School of Stomatology, Fudan University & Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai, China
- Gang Li
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology; National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China
6. Chou HH, Lin JY, Shen GT, Huang CY. Validation of an automated cardiothoracic ratio calculation for hemodialysis patients. Diagnostics (Basel) 2023; 13:1376. [PMID: 37189477; DOI: 10.3390/diagnostics13081376]
Abstract
Cardiomegaly is associated with poor clinical outcomes and is assessed by routine monitoring of the cardiothoracic ratio (CTR) from chest X-rays (CXRs). Judgment of the margins of the heart and lungs is subjective and may vary between different operators. METHODS Patients aged > 19 years in our hemodialysis unit from March 2021 to October 2021 were enrolled. The borders of the lungs and heart on CXRs were labeled by two nephrologists as the ground truth (nephrologist-defined mask). We implemented AlbuNet-34, a U-Net variant, to predict the heart and lung margins from CXR images and to automatically calculate the CTRs. RESULTS The coefficient of determination (R2) obtained using the neural network model was 0.96, compared with an R2 of 0.90 obtained by nurse practitioners. The mean difference between the CTRs calculated by the nurse practitioners and senior nephrologists was 1.52 ± 1.46%, and that between the neural network model and the nephrologists was 0.83 ± 0.87% (p < 0.001). The mean CTR calculation duration was 85 s using the manual method and less than 2 s using the automated method (p < 0.001). CONCLUSIONS Our study confirmed the validity of automated CTR calculations. By achieving high accuracy and saving time, our model can be implemented in clinical practice.
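A sketch of an automated CTR under the usual definition, the widest cardiac span divided by the widest thoracic span on the same CXR, computed here from binary masks; the paper's exact post-processing of the AlbuNet-34 predictions may differ.

```python
# CTR from binary heart and lung-field masks (illustrative mask geometry).
import numpy as np

def widest_span(mask):
    cols = np.where(mask.any(axis=0))[0]   # image columns containing the structure
    return cols.max() - cols.min() + 1 if cols.size else 0

def ctr(heart_mask, lung_mask):
    return widest_span(heart_mask) / widest_span(lung_mask)

heart = np.zeros((512, 512), bool); heart[200:350, 180:330] = True
lungs = np.zeros_like(heart);       lungs[100:420, 80:440] = True
print(f"CTR = {ctr(heart, lungs):.2f}")    # 150/360, about 0.42
```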
Affiliation(s)
- Hsin-Hsu Chou
- Department of Pediatrics, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung 413305, Taiwan
- Jin-Yi Lin
- Innovation and Incubation Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Guan-Ting Shen
- Innovation and Incubation Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Chih-Yuan Huang
- Division of Nephrology, Department of Internal Medicine, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600566, Taiwan
- Department of Sport Management, College of Recreation and Health Management, Chia Nan University of Pharmacy and Science, Tainan 717301, Taiwan
7. Özcan F, Uçan ON, Karaçam S, Tunçman D. Fully automatic liver and tumor segmentation from CT images using an AIM-Unet. Bioengineering (Basel) 2023; 10:215. [PMID: 36829709; PMCID: PMC9951904; DOI: 10.3390/bioengineering10020215]
Abstract
Segmentation of the liver in computed tomography (CT) images is difficult because its shape, borders, and density vary from section to section. In this study, the Adding Inception Module-Unet (AIM-Unet) model, a hybrid of the convolutional neural network-based Unet and Inception models, is proposed for automatic, computer-assisted segmentation of the liver and liver tumors from abdominal CT scans. Experiments were carried out on four liver CT datasets, one prepared for this study and three public (CHAOS, LiTS, and 3DIRCADb). The proposed method's outputs were compared with specialist-marked segmentations using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), and accuracy (ACC). Trained separately on the three liver-image datasets (LiTS, CHAOS, and our dataset), the proposed AIM-Unet achieved its best liver segmentation performance on the CHAOS dataset, with DSC, JSC, and ACC of 97.86%, 96.10%, and 99.75%, respectively. For tumor segmentation, DSCs of 75.6% and 65.5% were obtained on the LiTS and 3DIRCADb datasets, respectively. The segmentation results on these datasets were also compared with previous studies. These results indicate that the proposed method can serve as an auxiliary tool in physicians' decision-making for liver segmentation and detection of liver tumors, and the model can readily be adapted to other organs and medical applications.
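A sketch of the three reported overlap metrics (DSC, JSC, ACC) computed from binary masks, using the standard confusion-matrix definitions, which we assume match the paper's.

```python
# Dice, Jaccard, and accuracy for binary segmentation masks.
import numpy as np

def seg_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    jsc = tp / (tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return dsc, jsc, acc

pred = np.zeros((128, 128), bool); pred[30:90, 30:90] = True
gt = np.zeros_like(pred);          gt[35:95, 30:90] = True
print("DSC=%.3f JSC=%.3f ACC=%.3f" % seg_metrics(pred, gt))
```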
Affiliation(s)
- Fırat Özcan
- Department of Mechatronics Engineering, Faculty of Technology, Kayalı Campus, Kırklareli University, 39100 Kırklareli, Turkey
- Osman Nuri Uçan
- Faculty of Applied Sciences, Altınbaş University, Mahmutbey Dilmenler str., 26, 34217 Istanbul, Turkey
- Songül Karaçam
- Departman of Radiation Oncology, Cerrahpaşa Medical School, Cerrahpaşa Campus, İstanbul University-Cerrahpaşa, 34098 Istanbul, Turkey
- Duygu Tunçman
- Radiotherapy Program, Vocational School of Health Services, Sultangazi Campus, İstanbul University-Cerrahpaşa, 34265 Istanbul, Turkey
8. Huang X, Wang J, Li Z. 3D carotid artery segmentation using shape-constrained active contours. Comput Biol Med 2023; 153:106530. [PMID: 36610215; DOI: 10.1016/j.compbiomed.2022.106530]
Abstract
Reconstruction of the carotid artery is needed for the detection and characterization of atherosclerosis. This study proposes a shape-constrained active contour model for segmenting the carotid artery from MR images, which embeds the output of a deep learning network into the active contour. First, the centerline of the carotid artery is localized; a modified active contour initialized from the centerline then extracts the vessel lumen; finally, a probability atlas generated by the deep learning network in the polar representation domain is integrated into the active contour as prior information to detect the outer wall. The results showed that the proposed active contour model was efficient and comparable to manual segmentation.
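A sketch of the polar representation step named above: resampling an axial patch around a centerline point onto (radius, angle) coordinates, where a probability prior and the wall contour become one-dimensional per angle. Grid sizes and the sampling radius are illustrative assumptions, not the paper's settings.

```python
# Cartesian-to-polar resampling around a centerline point.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, center, n_r=64, n_theta=90, r_max=30.0):
    r = np.linspace(0, r_max, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = center[0] + rr * np.sin(tt)                  # sample positions in the image
    xs = center[1] + rr * np.cos(tt)
    return map_coordinates(img, [ys, xs], order=1)    # (n_r, n_theta) polar patch

img = np.random.rand(128, 128).astype(np.float32)     # stand-in axial MR patch
polar = to_polar(img, center=(64, 64))
print(polar.shape)                                    # (64, 90)
```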
Affiliation(s)
- Xianjue Huang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096, China
- Jun Wang
- First Affiliated Hospital, Nanjing Medical University, Nanjing, 210029, China
- Zhiyong Li
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, 210096, China; School of Mechanical, Medical and Process Engineering, Queensland University of Technology, Brisbane, 4000, Australia; Faculty of Sports Science, Ningbo University, Ningbo, 315211, China.
9. Garcea F, Serra A, Lamberti F, Morra L. Data augmentation for medical imaging: A systematic literature review. Comput Biol Med 2023; 152:106391. [PMID: 36549032; DOI: 10.1016/j.compbiomed.2022.106391]
Abstract
Recent advances in Deep Learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging is still a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to augment specific classes that are underrepresented in the training set, e.g., to generate artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018-2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
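As a sketch of the "simple yet effective" transformations the review surveys, a torchvision pipeline might look like the following; whether each transform is anatomically plausible is a per-task design choice (for example, left-right flips can be unsafe where laterality matters).

```python
# Example augmentation pipeline with cropping, flipping, and small rotations.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),               # flipping
    transforms.RandomRotation(degrees=10),                # small rotations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # cropping + resize
])

x = torch.rand(1, 256, 256)     # one grayscale image tensor (C, H, W)
print(augment(x).shape)         # torch.Size([1, 224, 224])
```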
Affiliation(s)
- Fabio Garcea
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Alessio Serra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Fabrizio Lamberti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy.
10. Liu Y, Gargesha M, Scott B, Tchilibou Wane AO, Wilson DL. Deep learning multi-organ segmentation for whole mouse cryo-images including a comparison of 2D and 3D deep networks. Sci Rep 2022; 12:15161. [PMID: 36071089; PMCID: PMC9452525; DOI: 10.1038/s41598-022-19037-3]
Abstract
Cryo-imaging provides 3D whole-mouse microscopic color anatomy and fluorescence images that enable biotechnology applications (e.g., stem cells and metastatic cancer). In this report, we compared three methods of organ segmentation: 2D U-Net on 2D slices, and 3D U-Net on either whole-mouse volumes or 3D patches. We evaluated the brain, thymus, lung, heart, liver, stomach, spleen, left and right kidneys, and bladder. Trained on 63 mice, the 2D-slices approach performed best, with median Dice scores of >0.9 and median Hausdorff distances of <1.2 mm in eightfold cross-validation for all organs except the bladder, a problem organ due to variable filling and poor contrast. Results were comparable to those of a second analyst on the same data. Regression analyses were performed to fit learning curves, which showed that the 2D-slices approach can succeed with fewer samples. Review and editing of the 2D-slices segmentation results reduced human operator time from ~2 h to ~25 min, with reduced inter-observer variability. As demonstrations, we used organ segmentation to evaluate size changes in liver disease and to quantify the distribution of therapeutic mesenchymal stem cells in organs. With a 48-GB GPU, we found that extra GPU RAM improved the performance of 3D deep learning because it allowed training at a higher resolution.
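A sketch contrasting the two training-data layouts compared in the paper, per-slice 2D samples versus tiled 3D patches from the same labeled volume; the shapes and the non-overlapping tiling are illustrative.

```python
# 2D-slices versus 3D-patches extraction from one (z, y, x) volume.
import numpy as np

vol = np.random.rand(96, 256, 256).astype(np.float32)   # stand-in cryo-image volume

# 2D-slices: every axial slice is one training sample.
slices_2d = [vol[z] for z in range(vol.shape[0])]
print(len(slices_2d), slices_2d[0].shape)               # 96 (256, 256)

# 3D-patches: tile the volume into 64^3 blocks (here without overlap).
def patches_3d(v, size=64, stride=64):
    out = []
    for z in range(0, v.shape[0] - size + 1, stride):
        for y in range(0, v.shape[1] - size + 1, stride):
            for x in range(0, v.shape[2] - size + 1, stride):
                out.append(v[z:z+size, y:y+size, x:x+size])
    return out

print(len(patches_3d(vol)), patches_3d(vol)[0].shape)   # 16 (64, 64, 64)
```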
Affiliation(s)
- Yiqiao Liu
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Bryan Scott
- BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA
- Arthure Olivia Tchilibou Wane
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
11. Yoo TK, Kim BY, Jeong HK, Kim HK, Yang D, Ryu IH. Simple code implementation for deep learning-based segmentation to evaluate central serous chorioretinopathy in fundus photography. Transl Vis Sci Technol 2022; 11:22. [PMID: 35147661; PMCID: PMC8842634; DOI: 10.1167/tvst.11.2.22]
Abstract
Purpose Central serous chorioretinopathy (CSC) is a retinal disease that frequently shows resolution and recurrence with serous detachment of the neurosensory retina. Here, we present a deep learning analysis of subretinal fluid (SRF) lesion segmentation in fundus photographs to evaluate CSC. Methods We collected 194 fundus photographs of SRF lesions from patients with CSC. Three graders manually annotated the entire SRF area in the retinal images. The dataset was randomly split into training (90%) and validation (10%) sets. We used a U-Net segmentation model based on conditional generative adversarial networks (pix2pix) to detect the SRF lesions. The algorithms were trained and validated using Google Colaboratory; researchers needed no prior coding skills or personal computing resources to implement this code. Results On validation, the Jaccard index and Dice coefficient were 0.619 and 0.763, respectively. In most cases, the segmentation results overlapped with most of the reference areas in the annotated images; however, predictions were less accurate for cases with atypical SRFs. Using Colaboratory, the proposed segmentation task ran easily in a web-based environment without setup or personal computing resources. Conclusions The results suggest that the deep learning model based on U-Net from the pix2pix algorithm is suitable for automatic segmentation of SRF lesions to evaluate CSC. Translational Relevance Our code implementation has the potential to facilitate ophthalmology research; in particular, deep learning-based segmentation can assist in the development of pathological lesion detection solutions.
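A hedged sketch of the pix2pix objective behind such a model: the U-Net generator is trained with an adversarial term from a conditional discriminator plus an L1 term against the annotation. The networks below are stand-in stubs, and λ = 100 follows the original pix2pix paper, not necessarily this study.

```python
# Generator-side pix2pix loss: conditional adversarial term + weighted L1.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # stand-in U-Net generator
D = nn.Sequential(nn.Conv2d(4, 1, 3, padding=1))                # PatchGAN-style critic stub
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

photo = torch.rand(2, 3, 256, 256)       # fundus photograph batch
mask = torch.rand(2, 1, 256, 256)        # annotated SRF masks

fake = G(photo)
pred = D(torch.cat([photo, fake], dim=1))                            # condition D on the photo
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, mask)   # lambda = 100 (pix2pix)
print(f"generator loss: {g_loss.item():.3f}")
```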
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Korea Air Force, Cheongju, South Korea
- B&VIIT Eye Center, Seoul, South Korea
- Bo Yi Kim
- Department of Ophthalmology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Hyun Kyo Jeong
- Department of Ophthalmology, 10th Fighter Wing, Republic of Korea Air Force, Suwon, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Donghyun Yang
- Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- Visuworks, Seoul, South Korea