1
Li Z, Gan G, Guo J, Zhan W, Chen L. Accurate object localization facilitates automatic esophagus segmentation in deep learning. Radiat Oncol 2024; 19:55. [PMID: 38735947] [PMCID: PMC11088757] [DOI: 10.1186/s13014-024-02448-z]
Abstract
BACKGROUND Automatic esophagus segmentation remains a challenging task because of the organ's small size, low contrast, and large shape variation. We aimed to improve the performance of deep-learning esophagus segmentation by applying a strategy that locates the object first and then performs the segmentation task. METHODS A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object localization network, was employed to locate the center of the esophagus on each slice. Subsequently, 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the object center updated according to the 3D U-net output. The dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes of delineation performance. The characteristics of the esophageal contours delineated automatically by the 2D U-net and 3D U-net models were summarized, the impact of object-localization accuracy on delineation performance was analyzed, and delineation performance in different segments of the esophagus was also summarized. RESULTS The mean dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively; the corresponding 95% Hausdorff distances were 6.55, 3.57, and 3.76. Compared with the 2D U-net, the 3D U-net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average dice coefficient improved by 5.5% in cases with a dice coefficient below 0.75, but by only 0.3% in cases above 0.75. Dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation than for the other regions. CONCLUSION The 3D U-net model tended to delineate fewer incorrect objects but missed more objects. A two-stage strategy with accurate object localization can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially in cases with poor delineation results.
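Both evaluation metrics reported here are standard and easy to reproduce. A minimal numpy/scipy sketch of the Dice coefficient and the surface-based 95% Hausdorff distance (not the authors' code; binary mask arrays and voxel spacing in mm are assumed inputs):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    surf_pred = pred & ~binary_erosion(pred)   # boundary voxels only
    surf_gt = gt & ~binary_erosion(gt)
    # Distance of every voxel to the nearest surface voxel of the other mask
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    all_dists = np.hstack([dist_to_gt[surf_pred], dist_to_pred[surf_gt]])
    return float(np.percentile(all_dists, 95))
```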
Affiliation(s)
- Zhibin Li
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Guanghui Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jian Guo
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Long Chen
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
2
Chen Y, Pahlavian SH, Jacobs P, Neupane T, Forghani-Arani F, Castillo E, Castillo R, Vinogradskiy Y. Systematic Evaluation of the Impact of Lung Segmentation Methods on 4-Dimensional Computed Tomography Ventilation Imaging Using a Large Patient Database. Int J Radiat Oncol Biol Phys 2024; 118:242-252. [PMID: 37607642] [PMCID: PMC10842520] [DOI: 10.1016/j.ijrobp.2023.08.017]
Abstract
PURPOSE A novel form of lung functional imaging for functional avoidance radiation therapy has been developed that uses 4-dimensional computed tomography (4DCT) data and image processing techniques to calculate lung ventilation (4DCT-ventilation). Lung segmentation is a common step to define a region of interest for 4DCT-ventilation generation. The purpose of this study was to quantitatively evaluate the sensitivity of 4DCT-ventilation imaging to different lung segmentation methods. METHODS AND MATERIALS The 4DCT data of 350 patients from 2 institutions were used. Lung contours were generated using 3 methods: (1) reference segmentations with the airways and pulmonary vasculature removed manually (Lung-Manual), (2) standard lung contours used for planning (Lung-RadOnc), and (3) artificial intelligence (AI)-based contours with the airways and pulmonary vasculature removed (Lung-AI). The AI model was based on a residual 3-dimensional U-Net and was trained using the Lung-Manual contours of 279 patients. We compared Lung-RadOnc and Lung-AI contours with Lung-Manual contours across the entire 4DCT-ventilation functional avoidance process, including lung segmentation (surface Dice similarity coefficient [surface DSC]), 4DCT-ventilation generation (correlation), and a subanalysis of 10 patients on a dosimetric endpoint (percentage of high-functional lung volume receiving ≥20 Gy, fV20(%)). RESULTS Surface DSC comparing Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI contours was 0.40 ± 0.06 and 0.86 ± 0.04, respectively. The correlations between 4DCT-ventilation images generated with Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI were 0.48 ± 0.14 and 0.85 ± 0.14, respectively. The differences in fV20(%) between 4DCT-ventilation generated with Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI were 2.5% ± 4.1% and 0.3% ± 0.5%, respectively. CONCLUSIONS Our work showed that using standard planning lung contours can result in significantly variable 4DCT-ventilation images. The study demonstrated that AI-based segmentations generate lung contours and 4DCT-ventilation images that are similar to those generated using manual methods. The significance of the study is that it characterizes the lung segmentation sensitivity of the 4DCT-ventilation process and develops methods that can facilitate the integration of this novel imaging in busy clinics.
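The study's central comparison, how strongly two ventilation maps agree inside the lungs, can be illustrated in a few lines. A hedged sketch (not the study code; the array names and the choice of Spearman rank correlation are assumptions for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

def ventilation_correlation(vent_a, vent_b, lung_mask):
    """Voxel-wise rank correlation of two 4DCT-ventilation images within a lung mask."""
    mask = lung_mask.astype(bool)
    rho, _ = spearmanr(vent_a[mask], vent_b[mask])
    return rho
```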
Affiliation(s)
- Yingxuan Chen
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Taindra Neupane
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Edward Castillo
- Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas
- Richard Castillo
- Department of Radiation Oncology, Emory University, Atlanta, Georgia
- Yevgeniy Vinogradskiy
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
3
Karri M, Annavarapu CSR, Acharya UR. Skin lesion segmentation using two-phase cross-domain transfer learning framework. Comput Methods Programs Biomed 2023; 231:107408. [PMID: 36805279] [DOI: 10.1016/j.cmpb.2023.107408]
Abstract
BACKGROUND AND OBJECTIVE Deep learning (DL) models have long been used for medical imaging, but they did not reach their full potential in the past because of insufficient computing power and scarce training data. Recent years have seen substantial growth in DL networks thanks to improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize across data from multiple sources because of domain shift. In addition, the ineffectiveness of basic data-fusion methods, the complexity of the segmentation target, and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS Our system rests on two technical contributions. First, we examine a two-phase cross-domain transfer learning approach, comprising model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. Second, we present nSknRSUNet, a high-performing DL network for skin lesion segmentation that uses broad receptive fields and spatial edge-attention feature fusion. To quantify these two contributions, we examine the trained model's generalization on skin lesion segmentation, cross-examining it on two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION Extensive experiments show that our system performs excellently and improves upon state-of-the-art methods on both qualitative and quantitative measures.
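The two-phase idea (model-level transfer from ImageNet, then data-level transfer through a related dermoscopic dataset) can be sketched as a generic fine-tuning schedule. This is an illustrative PyTorch outline under stated assumptions, not the authors' nSknRSUNet code; `SegModel`, `molemap_loader`, and `target_loader` are hypothetical stand-ins:

```python
import torch
import torchvision

def build_encoder():
    # Model-level transfer: initialize from ImageNet-pretrained weights.
    return torchvision.models.resnet34(
        weights=torchvision.models.ResNet34_Weights.IMAGENET1K_V1)

def fine_tune(model, loader, epochs, lr):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # binary lesion-vs-background masks
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
    return model

# Data-level transfer: adapt on the related dermoscopic set first, then
# fine-tune on the target dataset with a smaller learning rate.
# model = SegModel(encoder=build_encoder())
# model = fine_tune(model, molemap_loader, epochs=50, lr=1e-4)
# model = fine_tune(model, target_loader, epochs=20, lr=1e-5)
```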
Affiliation(s)
- Meghana Karri
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- Chandra Sekhara Rao Annavarapu
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), Dhanbad, 826004, Jharkhand, India
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences (SUSS), Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
4
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:3581. [PMID: 35892839] [PMCID: PMC9332287] [DOI: 10.3390/cancers14153581]
Abstract
Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input to a deep-learning-based auto-segmentation framework. Methods: We used non-contrast-enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input to the AccuContour segmentation software, to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC was higher than 0.80 for all OARs except the esophagus. AccuContour-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but showed limited performance for the esophagus. For the liver, the gain from denoising-based auto-segmentation was small but statistically significant, with a better DSC than AccuContour-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast-enhanced CT scans. Further external validation studies with larger cohorts are needed to verify its usefulness.
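The abstract does not state which denoising algorithm was applied, so the preprocessing step can only be illustrated generically. A minimal sketch assuming a simple median filter on the CT volume (in Hounsfield units) before it is passed to the auto-segmentation tool; the study's actual denoising method may differ:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_ct(ct_volume_hu, kernel_size=3):
    """Suppress noise in a non-contrast CT volume while keeping organ edges.

    A small median filter is one simple, assumed choice for this sketch.
    """
    return median_filter(ct_volume_hu, size=kernel_size)
```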
Affiliation(s)
- Jung Ho Im
- CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Correspondence: Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
5
Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. [PMID: 35330479] [PMCID: PMC8950137] [DOI: 10.3390/jpm12030480]
Abstract
Advancements in computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Given the high incidence and mortality associated with lung cancer, there is a need for the most accurate clinical procedures; thus, the possibility of using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles are identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also outline a clearer path for integrating AI in healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
- Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
6
Effects of group housing and incremental hay supplementation in calf starters at different ages on growth performance, behavior, and health. Sci Rep 2022; 12:3190. [PMID: 35210533] [PMCID: PMC8873488] [DOI: 10.1038/s41598-022-07210-7]
Abstract
The present study examined the effects of age at group housing and age at incremental hay supplementation in calf starters from 7.5 to 15% (dry matter, DM), and their interaction, on growth performance, behavior, and health of dairy calves and on development of heifers through first breeding. A total of 64 calves (n = 16 calves/treatment, 8 male and 8 female) were randomly assigned to 4 treatments in a 2 × 2 factorial arrangement, with age at group housing (early = d 28 ± 2, EG vs. late = d 70 ± 2, LG; 4 calves per group) and age at incremental hay supplementation of calf starters from 7.5 to 15% of DM (early = d 42 ± 2, EH vs. late = d 77 ± 2, LH) as the main factors. All calves (female and male) were weaned at 63 days of age and observed until 90 days of age. Heifer calves were managed uniformly from 90 days of age until first calving to evaluate the long-term effects of treatment. Contrary to our hypothesis, no interactions were observed between age at group housing and age at incremental hay supplementation on starter feed intake, performance, calf health and behavior, or heifer development through first breeding. The age at which incremental hay supplementation began had no effect on starter feed intake, growth performance, or heifer development until first calving. Compared with LG calves, EG calves had higher nutrient intake (starter, total dry matter, metabolizable energy, neutral detergent fiber, starch, and crude protein), average daily gain, and final body weight. In addition, the frequency of standing decreased and the time and frequency of eating increased in EG calves compared with LG calves. Overall, early group housing led to improved growth performance in dairy calves with no negative effects on calf health compared with late group housing.
7
ThoraxNet: a 3D U-Net based two-stage framework for OAR segmentation on thoracic CT images. Phys Eng Sci Med 2022; 45:189-203. [PMID: 35029804] [DOI: 10.1007/s13246-022-01101-x]
Abstract
An important phase of radiation treatment planning is the accurate contouring of the organs at risk (OAR), which is necessary for the dose distribution calculation. The manual contouring approach currently used in clinical practice is tedious, time-consuming, and prone to inter- and intra-observer variation. A deep learning-based auto-contouring tool can address these issues by accurately delineating OARs on computed tomography (CT) images. This paper proposes a two-stage deep learning-based segmentation model with an attention mechanism that automatically delineates OARs in thoracic CT images. After the input CT volume is preprocessed, a 3D U-Net architecture locates each organ to generate cropped images for the segmentation network. Two differently configured U-Net-based networks then segment the large organs (left lung, right lung, heart) and the small organs (esophagus and spinal cord), respectively. A post-processing step integrates the individually segmented organs to generate the final result. The proposed model outperformed state-of-the-art approaches in terms of dice similarity coefficient (DSC) for the lungs and the heart; notably, it achieved a dice score of 0.941 for the heart, 1.1% higher than the best previously reported score. Moreover, the clinical acceptability of the results was verified using dosimetric analysis. To delineate all five organs on a CT scan of size [Formula: see text], our model takes only 8.61 s. The proposed open-source automatic contouring tool can generate accurate contours in minimal time, speeding up treatment planning and reducing treatment cost.
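The locate-then-segment pattern described above hinges on cropping a sub-volume around each organ before fine segmentation. A minimal numpy sketch of that cropping step (illustrative only, not the ThoraxNet code; the padding size is an assumption):

```python
import numpy as np

def crop_around_organ(ct_volume, coarse_mask, pad=16):
    """Crop a padded bounding box around a coarse organ mask.

    Returns the cropped sub-volume and the slice tuple, so the fine
    segmentation result can be pasted back into the full volume.
    """
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = [max(int(idx.min()) - pad, 0) for idx in (zs, ys, xs)]
    hi = [min(int(idx.max()) + pad + 1, dim)
          for idx, dim in zip((zs, ys, xs), ct_volume.shape)]
    window = tuple(slice(l, h) for l, h in zip(lo, hi))
    return ct_volume[window], window
```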
8
Douglass MJJ, Keal JA. DeepWL: Robust EPID based Winston-Lutz analysis using deep learning, synthetic image generation and optical path-tracing. Phys Med 2021; 89:306-316. [PMID: 34492498] [DOI: 10.1016/j.ejmp.2021.08.012]
Abstract
Radiation therapy requires clinical linear accelerators to be mechanically and dosimetrically calibrated to a high standard. One important quality assurance test is the Winston-Lutz test, which localises the radiation isocentre of the linac. In the current work we demonstrate a novel method of analysing EPID-based Winston-Lutz QA images using a deep learning model trained only on synthetic image data. In addition, we propose a novel method of generating the synthetic WL images and associated 'ground-truth' masks using an optical path-tracing engine to 'fake' mega-voltage EPID images. The model, called DeepWL, was trained for 180 epochs on 1500 synthetic WL images with data augmentation. The model was built using Keras with a TensorFlow backend on an Intel Core i5-6500T CPU and trained in approximately 15 h. DeepWL produced ball-bearing and multi-leaf collimator field segmentations with mean dice coefficients of 0.964 and 0.994, respectively, on previously unseen synthetic testing data. When DeepWL was applied to WL data measured on an EPID, the predicted mean displacements were statistically similar to those of the Canny edge detection method. However, the DeepWL predictions of the ball-bearing locations correlated better with manual annotations than the Canny edge detection algorithm did. DeepWL analyses Winston-Lutz images with an accuracy suitable for routine linac quality assurance, with some statistical evidence that it may outperform Canny edge detection in terms of segmentation robustness and the resulting displacement predictions.
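Once the ball bearing and the MLC field are segmented (by DeepWL or any other method), the Winston-Lutz displacement reduces to the distance between the two centroids scaled by the EPID pixel size. A minimal sketch under those assumptions (mask names and the pixel-size parameter are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import center_of_mass

def wl_displacement(bb_mask, field_mask, pixel_size_mm):
    """Distance (mm) between ball-bearing and field centroids on an EPID image."""
    bb_centroid = np.array(center_of_mass(bb_mask))
    field_centroid = np.array(center_of_mass(field_mask))
    return float(np.linalg.norm(bb_centroid - field_centroid) * pixel_size_mm)
```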
Affiliation(s)
- Michael John James Douglass
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia; Department of Medical Physics, Royal Adelaide Hospital, Adelaide 5000, South Australia, Australia
- James Alan Keal
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia
9
Venerito V, Angelini O, Cazzato G, Lopalco G, Maiorano E, Cimmino A, Iannone F. A convolutional neural network with transfer learning for automatic discrimination between low and high-grade synovitis: a pilot study. Intern Emerg Med 2021; 16:1457-1465. [PMID: 33387201] [DOI: 10.1007/s11739-020-02583-x]
Abstract
Ultrasound-guided synovial tissue biopsy (USSB) may allow personalizing treatment for patients with inflammatory arthritis. To this end, quantification of tissue inflammation in synovial specimens can be crucial for adopting the proper therapeutic strategy. This study investigated whether computer vision may be of aid in discriminating the grade of synovitis in patients undergoing USSB. We used a database of 150 photomicrographs of synovium from patients who underwent USSB. For each hematoxylin and eosin (H&E)-stained slide, Krenn's score was calculated. After data pre-processing and fine-tuning, transfer learning on a ResNet34 convolutional neural network (CNN) was employed to discriminate between low- and high-grade synovitis (Krenn's score < 5 or ≥ 5). We computed test-phase metrics: accuracy, precision (true positives/predicted positives), and recall (true positives/actual positives). The Grad-CAM algorithm was used to highlight the image regions used by the model for prediction. We analyzed photomicrographs of specimens from 12 patients with arthritis. The training dataset included 90 images (42 with high-grade synovitis); the validation and test datasets included 30 images (14 with high-grade synovitis) and 30 images (16 with high-grade synovitis), respectively. An accuracy of 100% (precision = 1, recall = 1) was achieved in the test phase. Cellularity in the synovial lining and sublining layers was the salient determinant of the CNN's predictions. This study provides proof of concept that computer vision with transfer learning is suitable for scoring synovitis. Integrating a CNN-based approach into real-life patient management may improve the workflow between rheumatologists and pathologists.
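The three reported metrics follow the standard definitions, restated here as a minimal sketch (not the study code; `y_true` and `y_pred` are assumed binary label arrays with 1 = high-grade synovitis):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = float(np.mean(y_pred == y_true))
    precision = tp / (tp + fp)  # true positives over predicted positives
    recall = tp / (tp + fn)     # true positives over actual positives
    return accuracy, precision, recall
```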
Affiliation(s)
- Vincenzo Venerito
- Department of Emergency and Organ Transplantations-Rheumatology Unit, University of Bari "Aldo Moro", Bari, Italy
- Orazio Angelini
- King's College London, London, UK
- Amazon Research Cambridge, Cambridge, UK
- Gerardo Cazzato
- Department of Emergency and Organ Transplantations-Pathology Unit, University of Bari "Aldo Moro", Bari, Italy
- Giuseppe Lopalco
- Department of Emergency and Organ Transplantations-Rheumatology Unit, University of Bari "Aldo Moro", Bari, Italy
- Eugenio Maiorano
- Department of Emergency and Organ Transplantations-Pathology Unit, University of Bari "Aldo Moro", Bari, Italy
- Antonietta Cimmino
- Department of Emergency and Organ Transplantations-Pathology Unit, University of Bari "Aldo Moro", Bari, Italy
- Florenzo Iannone
- Department of Emergency and Organ Transplantations-Rheumatology Unit, University of Bari "Aldo Moro", Bari, Italy
10
Liu J, Dong B, Wang S, Cui H, Fan DP, Ma J, Chen G. COVID-19 lung infection segmentation with a novel two-stage cross-domain transfer learning framework. Med Image Anal 2021; 74:102205. [PMID: 34425317] [PMCID: PMC8342869] [DOI: 10.1016/j.media.2021.102205]
Abstract
With the global outbreak of COVID-19 in early 2020, rapid diagnosis of COVID-19 became an urgent need for controlling the spread of the epidemic. In clinical settings, lung infection segmentation from computed tomography (CT) images can provide vital information for the quantification and diagnosis of COVID-19. However, accurate infection segmentation is a challenging task due to (i) the low boundary contrast between infections and their surroundings, (ii) large variations of infection regions, and, most importantly, (iii) the shortage of large-scale annotated data. To address these issues, we propose a novel two-stage cross-domain transfer learning framework for the accurate segmentation of COVID-19 lung infections from CT images. Our framework consists of two major technical innovations: an effective infection segmentation deep learning model, called nCoVSegNet, and a novel two-stage transfer learning strategy. Specifically, nCoVSegNet conducts effective infection segmentation by taking advantage of attention-aware feature fusion and large receptive fields, aiming to resolve the issues of low boundary contrast and large infection variations. To alleviate the shortage of data, nCoVSegNet is pre-trained using a two-stage cross-domain transfer learning strategy, which makes full use of knowledge from natural images (i.e., ImageNet) and medical images (i.e., LIDC-IDRI) to boost the final training on CT images with COVID-19 infections. Extensive experiments demonstrate that our framework achieves superior segmentation accuracy and outperforms cutting-edge models, both quantitatively and qualitatively.
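The exact nCoVSegNet module design is given in the paper and not reproduced here; as a rough illustration of attention-aware feature fusion in general, a hedged PyTorch sketch in which channel attention derived from a high-level feature map re-weights a low-level one before fusion (both maps are assumed to share spatial size and channel count):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Generic attention-aware fusion of two feature maps (illustrative only)."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                                 # channel weights in (0, 1)
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, low_level, high_level):
        # Re-weight low-level features with attention from the high-level map,
        # then fuse by concatenation + 1x1 convolution.
        attended = low_level * self.gate(high_level)
        return self.fuse(torch.cat([attended, high_level], dim=1))
```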
Affiliation(s)
- Jiannan Liu
- Department of Computer Science and Technology, Heilongjiang University, Harbin, China
- Bo Dong
- Center for Brain Imaging Science and Technology, Zhejiang University, Hangzhou, China
- Shuai Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Hui Cui
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
- Deng-Ping Fan
- College of Computer Science, Nankai University, Tianjin, China
- Jiquan Ma
- Department of Computer Science and Technology, Heilongjiang University, Harbin, China
- Geng Chen
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
11
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13]
Abstract
Artificial intelligence (AI) is a field of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subset of AI that uses data-driven algorithms which learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, as well as different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order, identifying specific areas in which ML can improve quality and efficiency. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms has been performed for each stage.
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
- Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
12
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mob Netw Appl 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7]