1
Din S, Shoaib M, Serpedin E. CXR-Seg: A Novel Deep Learning Network for Lung Segmentation from Chest X-Ray Images. Bioengineering (Basel) 2025; 12:167. [PMID: 40001687; PMCID: PMC11851456; DOI: 10.3390/bioengineering12020167]
Abstract
Over the past decade, deep learning techniques, particularly neural networks, have become essential in medical imaging for tasks such as image detection, classification, and segmentation. These methods have greatly enhanced diagnostic accuracy, enabling quicker identification and more effective treatment. In chest X-ray analysis, however, challenges remain in accurately segmenting and classifying organs such as the lungs, heart, diaphragm, sternum, and clavicles, and in detecting abnormalities in the thoracic cavity. Despite progress, these issues highlight the need for improved approaches that overcome segmentation difficulties and enhance diagnostic reliability. In this context, we propose a novel architecture named CXR-Seg, tailored for semantic segmentation of the lungs from chest X-ray images. The proposed network consists of four main components: a pre-trained EfficientNet encoder to extract feature encodings, a spatial enhancement module embedded in the skip connections to promote adjacent feature fusion, a transformer attention module at the bottleneck layer, and a multi-scale feature fusion block in the decoder. The performance of the proposed CXR-Seg was evaluated on four publicly available datasets (MC, Darwin, and Shenzhen for chest X-rays, and TCIA for brain FLAIR segmentation from MRI images). The proposed method achieved a Jaccard index, Dice coefficient, accuracy, sensitivity, and specificity of 95.63%, 97.76%, 98.77%, 98.00%, and 99.05% on MC; 91.66%, 95.62%, 96.35%, 95.53%, and 96.94% on V7 Darwin COVID-19; and 92.97%, 96.32%, 96.69%, 96.01%, and 97.40% on the Shenzhen Tuberculosis CXR dataset, respectively. In conclusion, the proposed network offers improved performance compared with state-of-the-art methods and better generalization for the semantic segmentation of lungs from chest X-ray images.
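The five figures reported above are standard confusion-matrix overlap metrics. As a quick reference, here is a minimal NumPy sketch of how such metrics are computed from binary masks; this is our illustration of the definitions, not code from the paper, and the function name is an assumption.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Overlap metrics for two binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # lung pixels correctly found
    tn = np.logical_and(~pred, ~target).sum()  # background correctly found
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return {
        "jaccard":     tp / (tp + fp + fn + eps),
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
    }
```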
Affiliation(s)
- Sadia Din
- Electrical and Computer Engineering Program, Texas A&M University, Doha 23874, Qatar
- Muhammad Shoaib
- Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad, Abbottabad 22060, Pakistan
- Erchin Serpedin
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77840, USA

2
Shahbazi AS, Irandoost F, Mahdavian R, Shojaeilangari S, Allahvardi A, Naderi-Manesh H. A multi-stage weakly supervised design for spheroid segmentation to explore mesenchymal stem cell differentiation dynamics. BMC Bioinformatics 2025; 26:20. [PMID: 39825265; PMCID: PMC11742216; DOI: 10.1186/s12859-024-06031-x]
Abstract
There is growing interest in utilizing 3D culture models for stem cell and cancer cell research due to their closer resemblance to in vivo environments. In this study, human mesenchymal stem cells (MSCs) were cultured using adipocyte and osteocyte differentiation media on varying concentrations of chitosan substrate. Light microscopy was employed to capture cell images from the first to the 21st day of differentiation. Accurate image segmentation is crucial for analyzing the morphological features of the spheroids over the experimental period and for understanding MSC differentiation dynamics for therapeutic applications. We therefore developed an innovative, weakly supervised model, aided by convolutional neural networks, to perform label-free spheroid segmentation. Since obtaining pixel-level ground-truth labels through manual annotation is labor-intensive, our approach improves the overall quality of the ground-truth map by incorporating a multi-stage process within a weakly supervised learning framework. Additionally, we developed a robust learning scheme for spheroid detection, providing a reliable foundation for studying MSC differentiation dynamics. The proposed framework was systematically evaluated using low-resolution microscopic data with challenging, noisy backgrounds. The experimental results demonstrate the effectiveness of our segmentation approach in accurately separating the spheroids from the background, achieving performance comparable to fully supervised state-of-the-art approaches. To quantitatively evaluate our algorithm, extensive experiments were conducted using the available annotated data, confirming the reliability and robustness of our method. Our computationally extracted features confirm the experimental results regarding alterations in MSC viability, attachment, and differentiation dynamics among the three chitosan concentrations used. We observed the formation of more compact spheroids with higher solidity and convex area, resulting in improved cell attachment and viability, on the 2% chitosan substrate. This substrate also exhibited a higher propensity for differentiation into osteocytes, as evidenced by the formation of smaller and more ellipsoid spheroids. Highlights: "Chitosan biofilms mimic in vivo environments for stem cell culture, advancing therapeutic and fundamental applications." "Innovative weakly supervised model enables label-free spheroid segmentation in stem cell differentiation studies." "Robust learning scheme achieves accurate spheroid separation, comparable to state-of-the-art approaches."
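The shape descriptors discussed here (solidity, convex area, and an ellipsoid-like elongation) can be computed directly from a binary spheroid mask; a hedged scikit-image sketch, with names of our choosing rather than the authors', is:

```python
from skimage.measure import label, regionprops

def spheroid_features(mask):
    """Per-spheroid shape descriptors from a binary segmentation mask."""
    feats = []
    for region in regionprops(label(mask)):
        feats.append({
            "area": region.area,
            "convex_area": region.convex_area,
            "solidity": region.solidity,          # area / convex_area; compactness
            "eccentricity": region.eccentricity,  # 0 = circle, toward 1 = elongated
        })
    return feats
```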
Affiliation(s)
- Arash Shahbazpoor Shahbazi
- Department of Biophysics, Faculty of Biological Sciences, Tarbiat Modares University, Tehran, 14115-111, Iran
- Farzin Irandoost
- Department of Physics, Shahid Beheshti University (SBU Physics), Tehran, Iran
- Reza Mahdavian
- Department of Biophysics, Faculty of Biological Sciences, Tarbiat Modares University, Tehran, 14115-111, Iran
- Seyedehsamaneh Shojaeilangari
- Biomedical Engineering Group, Department of Electrical and Information Technology, Iranian Research Organization for Science and Technology (IROST), Tehran, 33535111, Iran
- Abdollah Allahvardi
- Department of Biophysics, Faculty of Biological Sciences, Tarbiat Modares University, Tehran, 14115-111, Iran
- Hossein Naderi-Manesh
- Department of Biophysics, Faculty of Biological Sciences, Tarbiat Modares University, Tehran, 14115-111, Iran

3
Alam MS, Wang D, Arzhaeva Y, Ende JA, Kao J, Silverstone L, Yates D, Salvado O, Sowmya A. Attention-based multi-residual network for lung segmentation in diseased lungs with custom data augmentation. Sci Rep 2024; 14:28983. [PMID: 39578613; PMCID: PMC11584877; DOI: 10.1038/s41598-024-79494-w]
Abstract
Lung disease analysis in chest X-rays (CXR) using deep learning presents significant challenges due to the wide variation in lung appearance caused by disease progression and differing X-ray settings. While deep learning models have shown remarkable success in segmenting lungs from CXR images with normal or mildly abnormal findings, their performance declines when faced with complex structures, such as pulmonary opacifications. In this study, we propose AMRU++, an attention-based multi-residual UNet++ network designed for robust and accurate lung segmentation in CXR images with both normal and severe abnormalities. The model incorporates attention modules to capture relevant spatial information and multi-residual blocks to extract rich contextual and discriminative features of lung regions. To further enhance segmentation performance, we introduce a data augmentation technique that simulates the features and characteristics of CXR pathologies, addressing the issue of limited annotated data. Extensive experiments on public and private datasets comprising 350 cases of pneumoconiosis, COVID-19, and tuberculosis validate the effectiveness of our proposed framework and data augmentation technique.
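The attention modules in networks of this family are typically additive attention gates on the skip connections. A generic PyTorch sketch of that building block (the standard Attention U-Net design, not the AMRU++ authors' code; channel arguments are assumptions) is:

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder's gating signal g re-weights
    skip-connection features x so irrelevant spatial locations are suppressed."""
    def __init__(self, f_g: int, f_l: int, f_int: int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(f_g, f_int, 1), nn.BatchNorm2d(f_int))
        self.w_x = nn.Sequential(nn.Conv2d(f_l, f_int, 1), nn.BatchNorm2d(f_int))
        self.psi = nn.Sequential(nn.Conv2d(f_int, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # Assumes g has already been resized to x's spatial dimensions.
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # (N, 1, H, W)
        return x * alpha
```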
Affiliation(s)
- Md Shariful Alam
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Jesse Alexander Ende
- Department of Radiology, St Vincent's Hospital Sydney, Darlinghurst, NSW, 2010, Australia
- Joanna Kao
- Department of Radiology, St Vincent's Hospital Sydney, Darlinghurst, NSW, 2010, Australia
- Liz Silverstone
- Department of Radiology, St Vincent's Hospital Sydney, Darlinghurst, NSW, 2010, Australia
- Deborah Yates
- Department of Thoracic Medicine, St Vincent's Hospital Sydney, Darlinghurst, NSW, 2010, Australia
- Olivier Salvado
- School of Electrical Engineering & Robotics, Queensland University of Technology, Brisbane, QLD, 4001, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

4
Rendon-Atehortua JC, Cardenas-Pena D, Daza-Santacoloma G, Orozco-Gutierrez AA, Jaramillo-Robledo O. Efficient Lung Segmentation from Chest Radiographs using Transfer Learning and Lightweight Deep Architecture. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. [PMID: 40039676; DOI: 10.1109/embc53108.2024.10782198]
Abstract
Lung delineation constitutes a critical preprocessing stage for X-ray-based diagnosis and follow-up. However, automatic lung segmentation from chest radiographs (CXR) poses a challenging problem due to the varying shapes and sizes of anatomical structures, differences in radio-opacity, contrast, and image quality, and the requirement of complex models for automatic detection of regions of interest. This work proposes an automated lung segmentation methodology, DenseCX, based on U-Net architectures and transfer learning techniques. Unlike other U-Net networks, DenseCX includes an encoder built from Dense blocks, promoting meaningful feature extraction with lightweight layers. A homogeneous domain adaptation then transfers knowledge from classifying a large cohort of CXRs to DenseCX, reducing the overfitting risk due to the lack of manually labeled images. The experimental setup evaluates the proposed methodology on three public datasets, namely Shenzhen Hospital Chest X-ray, the Japan Society of Radiological Technology, and Montgomery County Chest X-ray, in a leave-one-group-out validation strategy to ensure generalization. The attained Dice, sensitivity, and specificity metrics show that DenseCX outperforms conventional ImageNet initializations while providing a better trade-off between performance and model complexity than state-of-the-art approaches, with a much lighter architecture and improved convergence.
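As a rough approximation of this recipe, the third-party segmentation_models_pytorch library can assemble a U-Net with a dense-block encoder initialised from classification pre-training; note that DenseCX transfers weights from a CXR classifier rather than ImageNet, so the snippet below is only an analogous sketch.

```python
import segmentation_models_pytorch as smp

# U-Net with a DenseNet-style encoder; ImageNet weights stand in for the
# CXR-classification pre-training used by DenseCX (an assumption here).
model = smp.Unet(
    encoder_name="densenet121",
    encoder_weights="imagenet",
    in_channels=1,   # grayscale chest radiograph
    classes=1,       # binary lung mask
)
```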
5
Mo Y, Liu F, Yang G, Wang S, Zheng J, Wu F, Papież BW, McIlwraith D, He T, Guo Y. Labelling with dynamics: A data-efficient learning paradigm for medical image segmentation. Med Image Anal 2024; 95:103196. [PMID: 38781755; DOI: 10.1016/j.media.2024.103196]
Abstract
The success of deep learning on image classification and recognition tasks has led to new applications in diverse contexts, including the field of medical imaging. However, two properties of deep neural networks (DNNs) may limit their future use in medical applications. The first is that DNNs require a large amount of labeled training data, and the second is that deep learning-based models lack interpretability. In this paper, we propose and investigate a data-efficient framework for the task of general medical image segmentation. We address the two aforementioned challenges by introducing domain knowledge, in the form of a strong prior, into a deep learning framework. This prior is expressed by a customized dynamical system. We performed experiments on two different datasets, namely JSRT and ISIC2016 (heart and lung segmentation on chest X-ray images, and skin lesion segmentation on dermoscopy images). Using the same amount of training data, we achieved results competitive with state-of-the-art methods. More importantly, we demonstrate that our framework is extremely data-efficient and can achieve reliable results using extremely limited training data. Furthermore, the proposed method is rotationally invariant and insensitive to initialization.
Affiliation(s)
- Yuanhan Mo
- Big Data Institute, University of Oxford, UK; Data Science Institute, Imperial College London, UK
- Fangde Liu
- Data Science Institute, Imperial College London, UK
- Guang Yang
- Department of Bioengineering and Imperial-X, Imperial College London, UK
- Shuo Wang
- Data Science Institute, Imperial College London, UK
- Jianqing Zheng
- Chinese Academy for Medical Sciences Oxford Institute, Nuffield Department of Medicine, University of Oxford, UK
- Fuping Wu
- Big Data Institute, University of Oxford, UK
- Yike Guo
- Data Science Institute, Imperial College London, UK; Hong Kong University of Science and Technology, Hong Kong

6
Huang W, Ong WC, Wong MKF, Ng EYK, Koh T, Chandramouli C, Ng CT, Hummel Y, Huang F, Lam CSP, Tromp J. Applying the UTAUT2 framework to patients' attitudes toward healthcare task shifting with artificial intelligence. BMC Health Serv Res 2024; 24:455. [PMID: 38605373; PMCID: PMC11007870; DOI: 10.1186/s12913-024-10861-z]
Abstract
BACKGROUND Increasing patient loads, healthcare inflation, and an ageing population have put pressure on the healthcare system. Artificial intelligence (AI) and machine learning innovations can aid task shifting, helping healthcare systems remain efficient and cost-effective. To understand patients' acceptance of such task shifting with the aid of AI, this study adapted the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), looking at performance and effort expectancy, facilitating conditions, social influence, hedonic motivation, and behavioural intention. METHODS This was a cross-sectional study conducted between September 2021 and June 2022 at the National Heart Centre, Singapore. One hundred patients, aged ≥ 21 years with at least one heart failure symptom (pedal oedema, New York Heart Association II-III effort limitation, orthopnoea, breathlessness), who presented to the cardiac imaging laboratory for a physician-ordered clinical echocardiogram, underwent both echocardiography by skilled sonographers and echocardiography by a novice guided by AI technologies. They were then given a survey examining the above-mentioned constructs using the UTAUT2 framework. RESULTS Significant, direct, and positive effects of all constructs on the behavioural intention of accepting the AI-novice combination were found; facilitating conditions, hedonic motivation, and performance expectancy were the top three constructs. Analysis of the moderating variables (age, gender, and education level) found no impact on behavioural intention. CONCLUSIONS These results are important for stakeholders and changemakers such as policymakers, governments, physicians, and insurance companies as they design adoption strategies to ensure successful patient engagement, by focusing on the factors affecting facilitating conditions, hedonic motivation, and performance expectancy for AI technologies used in healthcare task shifting.
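For readers unfamiliar with UTAUT2-style analyses, the core step is regressing behavioural intention on the construct scores; a purely illustrative statsmodels sketch (column names are hypothetical, and the paper's actual modelling may differ) is:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("utaut2_survey.csv")  # hypothetical survey export
fit = smf.ols(
    "behavioural_intention ~ performance_expectancy + effort_expectancy"
    " + facilitating_conditions + social_influence + hedonic_motivation",
    data=df,
).fit()
print(fit.summary())  # construct coefficients and their significance
```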
Affiliation(s)
- Weiting Huang
- National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Wen Chong Ong
- National Healthcare Group Polyclinics, Singapore, Singapore
- Mark Kei Fong Wong
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Eddie Yin Kwee Ng
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Tracy Koh
- National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Chanchal Chandramouli
- National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Choon Ta Ng
- National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Carolyn Su Ping Lam
- National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Us2.ai, Singapore, Singapore
- Jasper Tromp
- Duke-NUS Medical School, Singapore, Singapore
- Saw Swee Hock School of Public Health, National University of Singapore, National University Health System Singapore, Singapore, Singapore

7
Yu F, Moehring A, Banerjee O, Salz T, Agarwal N, Rajpurkar P. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nat Med 2024; 30:837-849. [PMID: 38504016; PMCID: PMC10957478; DOI: 10.1038/s41591-024-02850-w]
Abstract
The integration of artificial intelligence (AI) in medical image interpretation requires effective collaboration between clinicians and AI algorithms. Although previous studies demonstrated the potential of AI assistance in improving overall clinician performance, the individual impact on clinicians remains unclear. This large-scale study examined the heterogeneous effects of AI assistance on 140 radiologists across 15 chest X-ray diagnostic tasks and identified predictors of these effects. Surprisingly, conventional experience-based factors, such as years of experience, subspecialty and familiarity with AI tools, fail to reliably predict the impact of AI assistance. Additionally, lower-performing radiologists do not consistently benefit more from AI assistance, challenging prevailing assumptions. Instead, we found that the occurrence of AI errors strongly influences treatment outcomes, with inaccurate AI predictions adversely affecting radiologist performance on the aggregate of all pathologies and on half of the individual pathologies investigated. Our findings highlight the importance of personalized approaches to clinician-AI collaboration and the importance of accurate AI models. By understanding the factors that shape the effectiveness of AI assistance, this study provides valuable insights for targeted implementation of AI, enabling maximum benefits for individual clinicians in clinical practice.
Affiliation(s)
- Feiyang Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Alex Moehring
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Oishi Banerjee
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Tobias Salz
- Department of Economics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Nikhil Agarwal
- Department of Economics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA

8
Chen Y, Mo Y, Readie A, Ligozio G, Mandal I, Jabbar F, Coroller T, Papież BW. VertXNet: an ensemble method for vertebral body segmentation and identification from cervical and lumbar spinal X-rays. Sci Rep 2024; 14:3341. [PMID: 38336974; PMCID: PMC10858234; DOI: 10.1038/s41598-023-49923-3]
Abstract
Accurate annotation of vertebral bodies is crucial for automating the analysis of spinal X-ray images. However, manual annotation of these structures is a laborious and costly process due to their complex nature, including small sizes and varying shapes. To address this challenge and expedite the annotation process, we propose an ensemble pipeline called VertXNet. This pipeline combines two segmentation mechanisms, semantic segmentation using U-Net and instance segmentation using Mask R-CNN, to automatically segment and label vertebral bodies in lateral cervical and lumbar spinal X-ray images. VertXNet enhances its effectiveness by adopting a rule-based strategy (termed the ensemble rule) for combining the segmentation outcomes from U-Net and Mask R-CNN. It determines vertebral body labels by recognizing specific reference vertebral instances, such as cervical vertebra 2 ('C2') in cervical spine X-rays and sacral vertebra 1 ('S1') in lumbar spine X-rays; these reference vertebrae are usually relatively easy to identify at the edge of the spine. To assess the performance of the proposed pipeline, we conducted evaluations on three spinal X-ray datasets, including two in-house datasets and one publicly available dataset, with ground-truth annotations provided by radiologists for comparison. Our experimental results show that the proposed pipeline outperformed two state-of-the-art (SOTA) segmentation models on our test dataset, with a mean Dice of 0.90 vs. 0.73 for Mask R-CNN and 0.72 for U-Net. We also demonstrate that VertXNet is a modular pipeline that enables the use of other SOTA models, such as nnU-Net, to further improve its performance. Furthermore, to evaluate the generalization ability of VertXNet on spinal X-rays, we directly tested the pre-trained pipeline on two additional datasets and observed consistently strong performance, with mean Dice coefficients of 0.89 and 0.88, respectively. In summary, VertXNet demonstrated significantly improved performance in vertebral body segmentation and labeling for spinal X-ray imaging, and its robustness and generalization were demonstrated on both in-house clinical trial data and publicly available datasets.
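The reference-vertebra idea can be sketched in a few lines: once instances are detected and one reference vertebra is recognised, names propagate down the column. The toy function below is our simplification, not VertXNet's actual ensemble rule.

```python
CERVICAL = ["C2", "C3", "C4", "C5", "C6", "C7"]  # names below the reference

def label_from_reference(centroids_y, ref_index, names=CERVICAL):
    """centroids_y: vertical centroids of detected vertebrae (any order);
    ref_index: index of the instance recognised as names[0] (e.g. 'C2')."""
    order = sorted(range(len(centroids_y)), key=lambda i: centroids_y[i])
    ref_rank = order.index(ref_index)
    labels = {}
    for rank, i in enumerate(order):
        offset = rank - ref_rank
        labels[i] = names[offset] if 0 <= offset < len(names) else None
    return labels

# label_from_reference([40.0, 95.0, 150.0], ref_index=0)
# -> {0: 'C2', 1: 'C3', 2: 'C4'}
```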
Affiliation(s)
- Yao Chen
- Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA
- Yuanhan Mo
- Big Data Institute, University of Oxford, Oxford, UK
- Aimee Readie
- Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA
- Indrajeet Mandal
- John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Faiz Jabbar
- John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK

9
Gómez Ó, Mesejo P, Ibáñez Ó, Valsecchi A, Bermejo E, Cerezo A, Pérez J, Alemán I, Kahana T, Damas S, Cordón Ó. Evaluating artificial intelligence for comparative radiography. Int J Legal Med 2024; 138:307-327. [PMID: 37801115; DOI: 10.1007/s00414-023-03080-4]
Abstract
INTRODUCTION Comparative radiography is a forensic identification and shortlisting technique based on the comparison of skeletal structures in ante-mortem and post-mortem images. The images (e.g., 2D radiographs or 3D computed tomographies) are manually superimposed and visually compared by a forensic practitioner. It requires a significant amount of time per comparison, limiting its utility in large comparison scenarios. METHODS We propose and validate a novel framework for automating the shortlisting of candidates using artificial intelligence. It is composed of (1) a segmentation method to delimit skeletal structures' silhouettes in radiographs, (2) a superposition method to generate the best simulated "radiographs" from 3D images according to the segmented radiographs, and (3) a decision-making method for shortlisting all candidates ranked according to a similarity metric. MATERIAL The dataset is composed of 180 computed tomographies and 180 radiographs where the frontal sinuses are visible. Frontal sinuses are the skeletal structure analyzed due to their high individualization capability. RESULTS Firstly, we validate two deep learning-based techniques for segmenting the frontal sinuses in radiographs, obtaining high-quality results. Secondly, we study the framework's shortlisting capability using both automatic segmentations and superimpositions. The obtained superimpositions, based only on the superimposition metric, allowed us to filter out 40% of the possible candidates in a completely automatic manner. Thirdly, we perform a reliability study by comparing 180 radiographs against 180 computed tomographies using manual segmentations. The results allowed us to filter out 73% of the possible candidates. Furthermore, the results are robust to inter- and intra-expert-related errors.
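The decision-making stage reduces to ranking candidates by the superimposition similarity and keeping the best-scoring fraction; a minimal sketch (names are ours, and the paper's similarity metric is far more elaborate) is:

```python
def shortlist(candidates, similarity, keep_fraction=0.6):
    """candidates: iterable of ids; similarity: dict id -> score (higher is better).
    keep_fraction=0.6 corresponds to filtering out 40% of candidates."""
    ranked = sorted(candidates, key=lambda c: similarity[c], reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]
```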
Affiliation(s)
- Óscar Gómez
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Pablo Mesejo
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Óscar Ibáñez
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Faculty of Computer Science, CITIC, University of A Coruña, A Coruña, Spain
- Andrea Valsecchi
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Enrique Bermejo
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Andrea Cerezo
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- José Pérez
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- Inmaculada Alemán
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
- Tzipi Kahana
- Faculty of Criminology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Sergio Damas
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Software Engineering, University of Granada, Granada, Spain
- Óscar Cordón
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain

10
Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. [PMID: 37871450; DOI: 10.1016/j.cmpb.2023.107875]
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state-of-the-art. However, in scenarios where the training data are insufficient to account for all the variations in ROIs, these methods struggle to segment challenging ROIs with less common image characteristics, such as low contrast to the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network - transformer network (HCTN) for semi-automatic segmentation to overcome the limitations of segmenting challenging MSK images. Specifically, we fuse user inputs (manual, e.g., mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs are used in interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN) - a deep learning model designed for handling sequence data - together with features derived from FCNs for segmentation; this addresses the limitation that FCNs can only operate with small kernels, which tend to dismiss global context and focus only on local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved a Dice similarity coefficient (DSC) of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting the clavicle bones from chest radiographs. Compared to the current state-of-the-art automatic segmentation method, our HCTN method is 11.7%, 19.11%, and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieved more generalizable results than current methods, especially on challenging MSK studies.
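One common way to fuse mouse clicks with network features in interactive segmentation is to rasterise the clicks into a Gaussian heatmap channel appended to the input; the sketch below illustrates that general idea and is not HCTN's specific fusion mechanism.

```python
import torch

def click_heatmap(clicks, height, width, sigma=10.0):
    """clicks: list of (row, col) user clicks -> (1, H, W) Gaussian heatmap."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    heat = torch.zeros(height, width)
    for r, c in clicks:
        g = torch.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        heat = torch.maximum(heat, g)  # keep the strongest response per pixel
    return heat.unsqueeze(0)

# image: (1, H, W) grayscale tensor -> 2-channel network input:
# x = torch.cat([image, click_heatmap([(120, 96)], *image.shape[-2:])], dim=0)
```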
Affiliation(s)
- Lei Bi
- Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
- Xiaohang Fu
- School of Computer Science, University of Sydney, NSW, Australia
- Tom Williamson
- Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
- Peter Choong
- Department of Surgery, University of Melbourne, VIC, Australia
- Jinman Kim
- School of Computer Science, University of Sydney, NSW, Australia

11
Jaganathan Y, Sanober S, Aldossary SMA, Aldosari H. Validating Wound Severity Assessment via Region-Anchored Convolutional Neural Network Model for Mobile Image-Based Size and Tissue Classification. Diagnostics (Basel) 2023; 13:2866. [PMID: 37761233; PMCID: PMC10529166; DOI: 10.3390/diagnostics13182866]
Abstract
Evaluating and tracking the size of a wound is a crucial step in wound assessment. Measuring various indicators of a wound over time plays a vital role in treating and managing serious wounds. This article introduces the concept of utilizing photographs captured with mobile devices to address this challenge. The research explores the application of digital technologies in the treatment of chronic wounds, offering tools to assist healthcare professionals in enhancing patient care and decision-making, and investigates the use of deep learning (DL) algorithms along with computer vision techniques to improve wound validation results. The proposed method involves tissue classification as well as a visual recognition system. The wound's region of interest (RoI) is determined using superpixel techniques, enabling calculation of the wounded area. A classification model based on the Region-Anchored CNN framework is employed to detect wounds and classify their tissues. The results demonstrate that the suggested DL method, with visual methodologies to detect the shape of a wound and measure its size, achieves strong results: using ResNet50, an accuracy of 0.85 is obtained, while the tissue classification CNN exhibits a median deviation error of 2.91 and a precision of 0.96. These outcomes highlight the effectiveness of the methodology in real-world scenarios and its potential to enhance therapeutic treatment of patients with chronic wounds.
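The superpixel step can be reproduced in outline with scikit-image's SLIC; the file name and parameter values below are illustrative assumptions, not the paper's settings.

```python
from skimage.io import imread
from skimage.segmentation import slic

image = imread("wound_photo.jpg")                       # hypothetical mobile photo
segments = slic(image, n_segments=300, compactness=10)  # superpixel label map
# Superpixels overlapping the predicted wound can be merged into the RoI;
# its pixel count, times the spatial calibration, estimates the wound size.
```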
Affiliation(s)
- Yogapriya Jaganathan
- Department of Computer Science and Engineering, Kongunadu College of Engineering and Technology, Trichy 621215, India
- Sumaya Sanober
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia
- Sultan Mesfer A Aldossary
- Department of Computer Sciences, College of Arts and Sciences, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia
- Huda Aldosari
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia

12
Gopatoti A, Vijayalakshmi P. MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images. Biomed Signal Process Control 2023; 85:104857. [PMID: 36968651; PMCID: PMC10027978; DOI: 10.1016/j.bspc.2023.104857]
Abstract
More than 603 million confirmed cases of coronavirus disease (COVID-19) had been reported as of September 2022, and its rapid spread has raised concerns worldwide, with more than 6.4 million fatalities among confirmed patients. According to reports, the COVID-19 virus causes lung damage and mutates rapidly before the patient receives any diagnosis-specific medicine. Daily increasing COVID-19 cases and the limited number of diagnostic kits encourage the use of deep learning (DL) models to assist healthcare practitioners using chest X-ray (CXR) images, as CXR is a low-radiation radiography tool available in hospitals to diagnose COVID-19 and combat this spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and MTMC-UR2CNet with attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation outputs of MTMC-UR2CNet and MTMC-AUR2CNet are mapped individually onto their input CXRs to generate the region of interest (ROI). Multi-textural features are extracted from the ROI of each proposed MTMC network, fused, and used to train a Whale optimization algorithm (WOA)-based DeepCNN classifier that classifies the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. The experimental results show that MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images, with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. MTMC-AUR2CNet also improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60%, compared to MTMC-UR2CNet.
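Multi-textural features of this kind are often grey-level co-occurrence matrix (GLCM) statistics over the segmented ROI; a hedged scikit-image sketch (the feature choice is ours, not necessarily the authors') is:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8: np.ndarray):
    """roi_u8: 2-D uint8 region of interest cut out by the lung-lobe mask."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```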
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Centre for Research, Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India

13
Horry MJ, Chakraborty S, Pradhan B, Paul M, Zhu J, Loh HW, Barua PD, Acharya UR. Development of Debiasing Technique for Lung Nodule Chest X-ray Datasets to Generalize Deep Learning Models. Sensors (Basel) 2023; 23:6585. [PMID: 37514877; PMCID: PMC10385599; DOI: 10.3390/s23146585]
Abstract
Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
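The brightness/contrast debiasing step amounts to per-image histogram equalisation; a minimal scikit-image sketch (file names hypothetical; the lung-field segmentation, cropping, and bone-suppression steps are omitted) is:

```python
from skimage import exposure, img_as_ubyte, io

cxr = io.imread("nodule_cxr.png", as_gray=True)     # hypothetical input image
cxr_eq = img_as_ubyte(exposure.equalize_hist(cxr))  # flatten the intensity histogram
io.imsave("nodule_cxr_eq.png", cxr_eq)
```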
Affiliation(s)
- Michael J Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- IBM Australia Limited, Sydney, NSW 2000, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Earth Observation Center, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Jing Zhu
- Department of Radiology, Westmead Hospital, Westmead, NSW 2145, Australia
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Prabal Datta Barua
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, QLD 4300, Australia

14
Busch F, Xu L, Sushko D, Weidlich M, Truhn D, Müller-Franzes G, Heimer MM, Niehues SM, Makowski MR, Hinsche M, Vahldiek JL, Aerts HJ, Adams LC, Bressem KK. Dual center validation of deep learning for automated multi-label segmentation of thoracic anatomy in bedside chest radiographs. Comput Methods Programs Biomed 2023; 234:107505. [PMID: 37003043; DOI: 10.1016/j.cmpb.2023.107505]
Abstract
BACKGROUND AND OBJECTIVES Bedside chest radiographs (CXRs) are challenging to interpret but important for monitoring cardiothoracic disease and invasive therapy devices in critical care and emergency medicine. Taking surrounding anatomy into account is likely to improve the diagnostic accuracy of artificial intelligence and bring its performance closer to that of a radiologist. Therefore, we aimed to develop a deep convolutional neural network for efficient automatic anatomy segmentation of bedside CXRs. METHODS To improve the efficiency of the segmentation process, we introduced a "human-in-the-loop" segmentation workflow with an active learning approach, looking at five major anatomical structures in the chest (heart, lungs, mediastinum, trachea, and clavicles). This allowed us to decrease the time needed for segmentation by 32% and select the most complex cases to utilize human expert annotators efficiently. After annotation of 2,000 CXRs from different Level 1 medical centers at Charité - University Hospital Berlin, there was no relevant improvement in model performance, and the annotation process was stopped. A 5-layer U-ResNet was trained for 150 epochs using a combined soft Dice similarity coefficient (DSC) and cross-entropy as a loss function. DSC, Jaccard index (JI), Hausdorff distance (HD) in mm, and average symmetric surface distance (ASSD) in mm were used to assess model performance. External validation was performed using an independent external test dataset from Aachen University Hospital (n = 20). RESULTS The final training, validation, and testing dataset consisted of 1900/50/50 segmentation masks for each anatomical structure. Our model achieved a mean DSC/JI/HD/ASSD of 0.93/0.88/32.1/5.8 for the lung, 0.92/0.86/21.65/4.85 for the mediastinum, 0.91/0.84/11.83/1.35 for the clavicles, 0.9/0.85/9.6/2.19 for the trachea, and 0.88/0.8/31.74/8.73 for the heart. Validation using the external dataset showed an overall robust performance of our algorithm. CONCLUSIONS Using an efficient computer-aided segmentation method with active learning, our anatomy-based model achieves comparable performance to state-of-the-art approaches. Instead of only segmenting the non-overlapping portions of the organs, as previous studies did, a closer approximation to actual anatomy is achieved by segmenting along the natural anatomical borders. This novel anatomy approach could be useful for developing pathology models for accurate and quantifiable diagnosis.
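The combined soft-Dice/cross-entropy objective can be sketched as follows for multi-label masks (one channel per structure, since the anatomy overlaps); the equal weighting of the two terms is our assumption, not a detail fixed by the abstract.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, targets, smooth=1.0, alpha=0.5):
    """logits, targets: (N, C, H, W); targets hold binary masks per structure."""
    probs = torch.sigmoid(logits)
    dims = (0, 2, 3)
    intersection = (probs * targets).sum(dims)
    dice = (2 * intersection + smooth) / (probs.sum(dims) + targets.sum(dims) + smooth)
    dice_loss = 1 - dice.mean()                                    # soft Dice term
    ce_loss = F.binary_cross_entropy_with_logits(logits, targets)  # cross-entropy term
    return alpha * dice_loss + (1 - alpha) * ce_loss
```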
Affiliation(s)
- Felix Busch
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Department of Anesthesiology, Division of Operative Intensive Care Medicine, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Lina Xu
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Dmitry Sushko
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Matthias Weidlich
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Maurice M Heimer
- Department of Radiology, Ludwig-Maximilians-University of Munich, Munich, Germany
- Stefan M Niehues
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Marcus R Makowski
- Department of Radiology, Technical University of Munich, Munich, Germany
- Markus Hinsche
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Janis L Vahldiek
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Hugo JWL Aerts
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Departments of Radiation Oncology and Radiology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Lisa C Adams
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Keno K Bressem
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA

15
Ghali R, Akhloufi MA. Vision Transformers for Lung Segmentation on CXR Images. SN Comput Sci 2023; 4:414. [PMID: 37252339; PMCID: PMC10206550; DOI: 10.1007/s42979-023-01848-4]
Abstract
Accurate segmentation of the lungs in CXR images is the basis of an automated CXR image analysis system. It helps radiologists in detecting lung areas and subtle signs of disease, and improves the diagnostic process for patients. However, precise semantic segmentation of the lungs is considered challenging due to the presence of rib cage edges, wide variation in lung shape, and lungs affected by disease. In this paper, we address the problem of lung segmentation in healthy and unhealthy CXR images. Five models were developed and used to detect and segment lung regions. Two loss functions and three benchmark datasets were employed to evaluate these models. Experimental results showed that the proposed models were able to extract salient global and local features from the input CXR images. The best-performing model achieved an F1 score of 97.47%, outperforming recently published models. The models proved able to separate lung regions from the rib cage and clavicle edges and to segment lungs of varying shape depending on age and gender, as well as challenging cases of lungs affected by anomalies such as tuberculosis and the presence of nodules.
Affiliation(s)
- Rafik Ghali
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Moulay A. Akhloufi
- Perception, Robotics, and Intelligent Machines (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada

16
Agrawal T, Choudhary P. ReSE-Net: Enhanced UNet architecture for lung segmentation in chest radiography images. Comput Intell 2023. [DOI: 10.1111/coin.12575]
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, NIT Hamirpur, Hamirpur, Himachal Pradesh, India
- Prakash Choudhary
- Department of Computer Science and Engineering, Central University of Rajasthan, Ajmer, Rajasthan, India

17
Olory Agomma R, Cresson T, de Guise J, Vazquez C. Automatic lower limb bone segmentation in radiographs with different orientations and fields of view based on a contextual network. Int J Comput Assist Radiol Surg 2023; 18:641-651. [PMID: 36463545; DOI: 10.1007/s11548-022-02798-7]
Abstract
PURPOSE Bone identification and segmentation in X-ray images are crucial in orthopedics for the automation of clinical procedures, but they often involve manual operations. In this work, using a modified SegNet neural network, we automatically identify and segment lower limb bone structures on radiographs presenting various fields of view and different patient orientations. METHODS A wide contextual neural network architecture is proposed to perform high-quality pixel-wise semantic segmentation on X-ray images presenting structures with a similar appearance and strong superposition. The proposed architecture is based on the premise that every output pixel on the label map has a wide receptive field, which allows the network to capture both global and local contextual information. Overlap between structures is handled with additional labels. RESULTS The proposed approach was evaluated on a test dataset composed of 70 radiographs with entire and partial bones. We obtained an average detection rate of 98.00% and an average Dice coefficient of 95.25 ± 9.02% across all classes. For the challenging subset of images with high superposition, we obtained an average detection rate of 96.36% and an average Dice coefficient of 93.81 ± 10.03% across all classes. CONCLUSION The results show the effectiveness of the proposed approach in segmenting and identifying lower limb bone structures and overlapping structures in radiographs with strong bone superposition and highly variable configurations, as well as in radiographs containing only small pieces of bone structures.
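A standard way to give every output pixel a wide receptive field, which is the stated premise of this architecture, is to stack dilated convolutions; the block below is a generic PyTorch sketch with illustrative channel counts and dilation rates, not the paper's exact design.

```python
import torch.nn as nn

# Receptive field grows rapidly with the dilation rate while spatial
# resolution is preserved.
wide_context_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4), nn.ReLU(inplace=True),
)
```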
Affiliation(s)
- Roseline Olory Agomma
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada
- Thierry Cresson
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada
- Jacques de Guise
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada
- Centre de recherche du CHUM, 900 Saint-Denis Street, Montreal, QC, Canada
- Carlos Vazquez
- Laboratoire de recherche en imagerie et orthopédie, 900 Saint-Denis Street, Montreal, QC, Canada
- École de technologie supérieure, 1100 Notre-Dame St. West, Montreal, QC, Canada

18
Ullah I, Ali F, Shah B, El-Sappagh S, Abuhmed T, Park SH. A deep learning based dual encoder-decoder framework for anatomical structure segmentation in chest X-ray images. Sci Rep 2023; 13:791. [PMID: 36646735; PMCID: PMC9842654; DOI: 10.1038/s41598-023-27815-w]
Abstract
Automated multi-organ segmentation plays an essential part in computer-aided diagnosis (CAD) for chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder-decoder convolutional neural network (CNN). The first network in the dual encoder-decoder structure effectively utilizes a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network's representation power, enabling dynamic channel-wise feature recalibration. The recalibrated features are passed to the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder-decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that our proposed technique outperforms existing multi-class and single-class segmentation methods.
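The squeeze-and-excitation step is a standard channel-recalibration block; a generic PyTorch sketch of the original SE design (Hu et al.), rather than this paper's exact code, is:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel weights from globally pooled
    features and rescale the feature map channel-wise."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(n, c, 1, 1)     # excite: channel-wise recalibration
```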
Collapse
Affiliation(s)
- Ihsan Ullah: Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Farman Ali: Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, 03063, South Korea
- Babar Shah: College of Technological Innovation, Zayed University, Dubai, 19282, United Arab Emirates
- Shaker El-Sappagh: Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt; Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, 13518, Egypt
- Tamer Abuhmed: Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University, Suwon, 16419, South Korea
- Sang Hyun Park: Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
19
Raj R, Londhe ND, Sonawane R. PsLSNetV2: End to end deep learning system for measurement of area score of psoriasis regions in color images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
20
A Patient-Specific Algorithm for Lung Segmentation in Chest Radiographs. AI 2022. [DOI: 10.3390/ai3040055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Lung segmentation plays an important role in computer-aided detection and diagnosis using chest radiographs (CRs). Currently, the U-Net and DeepLabv3+ convolutional neural network architectures are widely used to perform CR lung segmentation. To boost performance, ensemble methods are often used, whereby probability map outputs from several networks operating on the same input image are averaged. However, not all networks perform adequately for any specific patient image, even if the average network performance is good. To address this, we present a novel multi-network ensemble method that employs a selector network. The selector network evaluates the segmentation outputs from several networks; on a case-by-case basis, it selects which outputs are fused to form the final segmentation for that patient. Our candidate lung segmentation networks include U-Net, with five different encoder depths, and DeepLabv3+, with two different backbone networks (ResNet50 and ResNet18). Our selector network is a ResNet18 image classifier. We perform all training using the publicly available Shenzhen CR dataset. Performance testing is carried out with two independent publicly available CR datasets, namely, Montgomery County (MC) and Japanese Society of Radiological Technology (JSRT). Intersection-over-Union scores for the proposed approach are 13% higher than the standard averaging ensemble method on MC and 5% better on JSRT.
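A minimal sketch of the per-case selective fusion idea, assuming the selector produces a per-network suitability score for each image (the function name, the number of kept networks, and the 0.5 threshold are illustrative):

```python
import numpy as np

def fuse_selected(prob_maps: list, selector_scores: np.ndarray,
                  keep: int = 3) -> np.ndarray:
    """prob_maps: HxW lung-probability maps from the candidate networks;
    selector_scores: per-network suitability scores for this one image."""
    chosen = np.argsort(selector_scores)[-keep:]        # best `keep` networks
    fused = np.mean([prob_maps[i] for i in chosen], axis=0)
    return (fused >= 0.5).astype(np.uint8)              # final binary lung mask

# Standard averaging would instead fuse all maps regardless of per-case quality.
maps = [np.random.rand(256, 256) for _ in range(7)]    # 5 U-Nets + 2 DeepLabv3+
mask = fuse_selected(maps, np.random.rand(7))
```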
21
Genc A, Kovarik L, Fraser HL. A deep learning approach for semantic segmentation of unbalanced data in electron tomography of catalytic materials. Sci Rep 2022; 12:16267. [PMID: 36171204 PMCID: PMC9519981 DOI: 10.1038/s41598-022-16429-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 07/11/2022] [Indexed: 11/09/2022] Open
Abstract
In computed TEM tomography, image segmentation represents one of the most basic tasks, with implications not only for 3D volume visualization but, more importantly, for quantitative 3D analysis. In the case of large and complex 3D data sets, segmentation can be an extremely difficult and laborious task, and thus has been one of the biggest hurdles for comprehensive 3D analysis. Heterogeneous catalysts have complex surface and bulk structures, and often a sparse distribution of catalytic particles with relatively poor intrinsic contrast, which poses a unique challenge for image segmentation, including for the current state-of-the-art deep learning methods. To tackle this problem, we apply a deep learning-based approach for the multi-class semantic segmentation of a γ-Alumina/Pt catalytic material in a class-imbalance situation. Specifically, we used the weighted focal loss as a loss function and attached it to the U-Net's fully convolutional network architecture. We assessed the accuracy of our results using Dice similarity coefficient (DSC), recall, precision, and Hausdorff distance (HD) metrics on the overlap between the ground-truth and predicted segmentations. Our adopted U-Net model with the weighted focal loss function achieved an average DSC score of 0.96 ± 0.003 in the γ-Alumina support material and 0.84 ± 0.03 in the Pt NPs segmentation tasks. We report an average boundary-overlap error of less than 2 nm at the 90th percentile of HD for the γ-Alumina and Pt NPs segmentations. The complex surface morphology of γ-Alumina and its relation to the Pt NPs were visualized in 3D by the deep learning-assisted automatic segmentation of a large data set of high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) tomography reconstructions.
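The loss named above is well defined in the literature; here is a hedged PyTorch sketch of a class-weighted focal loss for imbalanced multi-class segmentation (gamma and the example weights are illustrative, not the study's values):

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, target, class_weights, gamma: float = 2.0):
    """logits: (N, C, H, W); target: (N, H, W) integer class labels;
    class_weights: (C,) tensor, largest for the rarest class (e.g., Pt NPs)."""
    ce = F.cross_entropy(logits, target, reduction="none")   # per-pixel CE
    pt = torch.exp(-ce)                  # model probability of the true class
    w = class_weights.to(logits.device)[target]              # per-pixel weight
    return (w * (1.0 - pt) ** gamma * ce).mean()  # down-weight easy pixels

# Three classes: background, gamma-alumina support, Pt nanoparticles (rare)
loss = weighted_focal_loss(torch.randn(2, 3, 64, 64),
                           torch.randint(0, 3, (2, 64, 64)),
                           torch.tensor([0.2, 0.8, 3.0]))
```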
Affiliation(s)
- Arda Genc: Center for the Accelerated Maturation of Materials, Department of Materials Science and Engineering, The Ohio State University, Columbus, OH, USA; Materials Department, University of California Santa Barbara, Santa Barbara, CA, USA
- Libor Kovarik: Institute for Integrated Catalysis, Pacific Northwest National Laboratory, Richland, WA, USA
- Hamish L Fraser: Center for the Accelerated Maturation of Materials, Department of Materials Science and Engineering, The Ohio State University, Columbus, OH, USA
22
Yang L, Gu Y, Huo B, Liu Y, Bian G. A shape-guided deep residual network for automated CT lung segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
23
Quantitative Measurement of Pneumothorax Using Artificial Intelligence Management Model and Clinical Application. Diagnostics (Basel) 2022; 12:diagnostics12081823. [PMID: 36010174 PMCID: PMC9406694 DOI: 10.3390/diagnostics12081823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/16/2022] [Accepted: 07/26/2022] [Indexed: 11/23/2022] Open
Abstract
Artificial intelligence (AI) techniques can be a solution for delayed or misdiagnosed pneumothorax. This study developed a deep-learning-based AI model to estimate the pneumothorax amount on a chest radiograph and applied it to a treatment algorithm developed by experienced thoracic surgeons. A U-Net performed semantic segmentation and classification of pneumothorax and non-pneumothorax areas. The pneumothorax amount was measured using chest computed tomography (volume ratio, gold standard) and chest radiographs (area ratio, true label) and calculated using the AI model (area ratio, predicted label). Each value was compared and analyzed based on clinical outcomes. The study included 96 patients, of whom 67 comprised the training set and the remainder the test set. The AI model showed an accuracy of 97.8%, a sensitivity of 69.2%, a negative predictive value of 99.1%, and a Dice similarity coefficient of 61.8%. In the test set, the average amount of pneumothorax was 15%, 16%, and 13% in the gold standard, predicted, and true labels, respectively. The predicted label was not significantly different from the gold standard (p = 0.11) but was inferior to the true label (difference in MAE: 3.03%). The amount of pneumothorax in thoracostomy patients was 21.6% in predicted cases and 18.5% in true cases.
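A sketch of the area-ratio measurement implied above, assuming binary masks and this particular normalization (the study's exact clinical definition may differ):

```python
import numpy as np

def pneumothorax_amount(pneumo_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """pneumo_mask, lung_mask: binary masks from the semantic segmentation."""
    pneumo_px = int(pneumo_mask.sum())
    total_px = pneumo_px + int(lung_mask.sum())   # affected hemithorax area
    return 100.0 * pneumo_px / max(total_px, 1)   # percent; guard empty masks
```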
24
Lung Field Segmentation in Chest X-ray Images Using Superpixel Resizing and Encoder–Decoder Segmentation Networks. Bioengineering (Basel) 2022; 9:bioengineering9080351. [PMID: 36004876 PMCID: PMC9404743 DOI: 10.3390/bioengineering9080351] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/24/2022] [Accepted: 07/26/2022] [Indexed: 11/25/2022] Open
Abstract
Lung segmentation of chest X-ray (CXR) images is a fundamental step in many diagnostic applications. Most lung field segmentation methods reduce the image size to speed up the subsequent processing time. Then, the low-resolution result is upsampled to the original high-resolution image. Nevertheless, the image boundaries become blurred after the downsampling and upsampling steps. It is necessary to alleviate blurred boundaries during downsampling and upsampling. In this paper, we incorporate the lung field segmentation with the superpixel resizing framework to achieve the goal. The superpixel resizing framework upsamples the segmentation results based on the superpixel boundary information obtained from the downsampling process. Using this method, not only can the computation time of high-resolution medical image segmentation be reduced, but also the quality of the segmentation results can be preserved. We evaluate the proposed method on JSRT, LIDC-IDRI, and ANH datasets. The experimental results show that the proposed superpixel resizing framework outperforms other traditional image resizing methods. Furthermore, combining the segmentation network and the superpixel resizing framework, the proposed method achieves better results with an average time score of 4.6 s on CPU and 0.02 s on GPU.
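A rough sketch of superpixel-guided upsampling in the spirit of the framework above, assuming scikit-image's SLIC superpixels and a majority vote per superpixel (function name and parameters are illustrative, not the authors' implementation):

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_upsample(image_hr: np.ndarray, mask_lr: np.ndarray,
                        n_segments: int = 2000) -> np.ndarray:
    """image_hr: full-resolution grayscale CXR; mask_lr: low-res binary mask."""
    # Plain nearest-neighbour upsample of the low-res mask (the blurry baseline)
    mask_up = resize(mask_lr.astype(float), image_hr.shape, order=0) > 0.5
    # Superpixels computed on the full-resolution image carry boundary detail
    sp = slic(image_hr, n_segments=n_segments, compactness=10, channel_axis=None)
    out = np.zeros_like(mask_up)
    for label in np.unique(sp):          # snap the mask to superpixel boundaries
        region = sp == label
        out[region] = mask_up[region].mean() > 0.5   # majority vote per region
    return out
```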
25
Pasupathy V, Khilar R. Advancements in deep structured learning based medical image interpretation. JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES 2022. [DOI: 10.1080/02522667.2022.2094550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023]
Affiliation(s)
- Vijayalakshmi Pasupathy: Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
- Rashmita Khilar: Department of Information Technology, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
26
Hu J, Zhang C, Zhou K, Gao S. Chest X-Ray Diagnostic Quality Assessment: How Much Is Pixel-Wise Supervision Needed? IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1711-1723. [PMID: 35120002 DOI: 10.1109/tmi.2022.3149171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Chest X-ray is an important imaging method for the diagnosis of chest diseases. Chest radiograph diagnostic quality assessment is vital for the diagnosis of disease because unqualified radiographs negatively impact doctors' diagnoses and thus increase the burden on patients due to re-acquisition of the radiographs. So far, no algorithms or public data sets have been developed for chest radiograph diagnostic quality assessment. Towards effective chest X-ray diagnostic quality assessment, we analyze the image characteristics of four main chest radiograph diagnostic quality issues, i.e., Scapula Overlapping Lung, Artifact, Lung Field Loss, and Clavicle Unflatness. Our experiments show that general image classification methods are not competent for the task because the detailed information used for quality assessment by radiologists cannot be fully exploited by deep CNNs and image-level annotations. We therefore propose to leverage a multi-label semantic segmentation framework to find the problematic regions and then classify the quality issues based on the segmentation results. However, subsequent classification is often negatively affected by certain small segmentation errors. Therefore, we propose to estimate a distance map that measures the distance from a pixel to its nearest segment, and use it to force the semantic segmentation prediction to be more holistic and suitable for classification. Extensive experiments validate the effectiveness of our semantic-segmentation-based solution for chest X-ray diagnostic quality assessment. However, general segmentation-based algorithms require fine pixel-wise annotations in the era of deep learning. To reduce reliance on fine annotations and to further validate how important pixel-wise annotations are, weak supervision for segmentation is applied, and it achieves performance close to that of full supervision. Finally, we present ChestX-rayQuality, a chest radiograph data set comprising 480 frontal-view chest radiographs with semantic segmentation annotations and four quality-issue labels. In addition, another 1,212 chest radiographs with limited annotations are used to validate our algorithms and arguments on a larger data set. These two data sets will be made publicly available.
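The distance map described here can be computed with a standard Euclidean distance transform; a small sketch, assuming a binary segment mask:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_to_nearest_segment(segment_mask: np.ndarray) -> np.ndarray:
    """segment_mask: binary HxW prediction. distance_transform_edt measures the
    distance to the nearest zero, so inverting the mask yields, for every pixel,
    the Euclidean distance to the closest predicted segment pixel."""
    return distance_transform_edt(~segment_mask.astype(bool))
```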
27
Shanker RRBJ, Zhang MH, Ginat DT. Semantic Segmentation of Extraocular Muscles on Computed Tomography Images Using Convolutional Neural Networks. Diagnostics (Basel) 2022; 12:1553. [PMID: 35885459 PMCID: PMC9325103 DOI: 10.3390/diagnostics12071553] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/22/2022] Open
Abstract
Computed tomography (CT) imaging of the orbit with measurement of extraocular muscle size can be useful for diagnosing and monitoring conditions that affect the extraocular muscles. However, manual measurement of extraocular muscle size can be time-consuming and tedious. The purpose of this study is to evaluate the effectiveness of deep learning algorithms in segmenting extraocular muscles and measuring muscle sizes from CT images. Consecutive CT scans of orbits from 210 patients between 1 January 2010 and 31 December 2019 were used. Extraocular muscles were manually annotated in the studies, which were then used to train the deep learning algorithms. The proposed U-Net algorithm can segment extraocular muscles on coronal slices of 32 test samples with an average Dice score of 0.92. The thickness and area measurements from predicted segmentations had a mean absolute error (MAE) of 0.35 mm and 3.87 mm2, respectively, with a corresponding mean absolute percentage error (MAPE) of 7% and 9%, respectively. On qualitative analysis of the 32 test samples, 30 predicted segmentations from the U-Net algorithm were accepted while 2 were rejected. Based on the results of the quantitative and qualitative evaluations, this study demonstrates that CNN-based deep learning algorithms are effective at segmenting extraocular muscles and measuring muscle sizes.
Affiliation(s)
- Michael H. Zhang: Department of Radiology, University of Chicago, Chicago, IL 60615, USA (R.R.B.J.S.; M.H.Z.)
- Daniel T. Ginat: Department of Radiology, Section of Neuroradiology, University of Chicago, Chicago, IL 60615, USA
28
Jafar A, Hameed MT, Akram N, Waqas U, Kim HS, Naqvi RA. CardioNet: Automatic Semantic Segmentation to Calculate the Cardiothoracic Ratio for Cardiomegaly and Other Chest Diseases. J Pers Med 2022; 12:988. [PMID: 35743771 PMCID: PMC9225197 DOI: 10.3390/jpm12060988] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 06/02/2022] [Accepted: 06/13/2022] [Indexed: 11/18/2022] Open
Abstract
Semantic segmentation for diagnosing chest-related diseases like cardiomegaly, emphysema, pleural effusions, and pneumothorax is a critical yet understudied tool for identifying the chest anatomy. Among these, cardiomegaly is particularly dangerous, carrying a high risk of sudden death. An expert medical practitioner can diagnose cardiomegaly early using a chest radiograph (CXR). Cardiomegaly is a heart-enlargement disease that can be analyzed by calculating the transverse cardiac diameter (TCD) and the cardiothoracic ratio (CTR). However, manual estimation of the CTR and of other chest-related diseases demands considerable time from medical experts. Artificial intelligence can estimate cardiomegaly and related diseases by segmenting CXRs according to their anatomical semantics. Unfortunately, due to poor-quality images and variations in intensity, the automatic segmentation of the lungs and heart in CXRs is challenging. Deep learning-based methods are being used for chest anatomy segmentation, but most of them consider only lung segmentation and require a great deal of training. This work is based on a multiclass, concatenation-based, automatic semantic segmentation network, CardioNet, explicitly designed to perform fine segmentation using fewer parameters than a conventional deep learning scheme. Furthermore, the semantic segmentation of other chest-related diseases is diagnosed using CardioNet. CardioNet is evaluated using the JSRT dataset (Japanese Society of Radiological Technology), which is publicly available and contains multiclass segmentations of the heart, lungs, and clavicle bones. In addition, our study examined lung segmentation using another publicly available dataset, Montgomery County (MC). The experimental results of the proposed CardioNet model achieved acceptable accuracy and competitive results across all datasets.
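For orientation, a sketch of how the CTR could be read from predicted masks, taking the maximal horizontal cardiac extent over the maximal thoracic extent (a simplification of the clinical TCD measurement, not the authors' exact procedure):

```python
import numpy as np

def max_width(mask: np.ndarray) -> int:
    cols = np.where(mask.any(axis=0))[0]      # columns touched by the organ
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, lungs_mask: np.ndarray) -> float:
    tcd = max_width(heart_mask)               # transverse cardiac diameter proxy
    thoracic = max_width(lungs_mask)          # internal thoracic diameter proxy
    return tcd / thoracic if thoracic else float("nan")

# A CTR above roughly 0.5 on a PA radiograph is the usual cardiomegaly cue.
```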
Affiliation(s)
- Abbas Jafar: Department of Computer Engineering, Myongji University, Yongin 03674, Korea
- Muhammad Talha Hameed: Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Nadeem Akram: Department of Primary and Secondary Healthcare, Lahore 54000, Pakistan
- Umer Waqas: Research and Development, AItheNutrigene, Seoul 06132, Korea
- Hyung Seok Kim: School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Rizwan Ali Naqvi: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
29
Ahsan MM, Siddique Z. Machine learning-based heart disease diagnosis: A systematic literature review. Artif Intell Med 2022; 128:102289. [DOI: 10.1016/j.artmed.2022.102289] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/22/2022] [Indexed: 01/01/2023]
30
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878 PMCID: PMC9153705 DOI: 10.1259/bjr.20201107] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild: Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
31
Carrión H, Jafari M, Bagood MD, Yang HY, Isseroff RR, Gomez M. Automatic wound detection and size estimation using deep learning algorithms. PLoS Comput Biol 2022; 18:e1009852. [PMID: 35275923 PMCID: PMC8942216 DOI: 10.1371/journal.pcbi.1009852] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 03/23/2022] [Accepted: 01/20/2022] [Indexed: 11/17/2022] Open
Abstract
Evaluating and tracking wound size is a fundamental metric of the wound assessment process. Good location and size estimates can enable proper diagnosis and effective treatment. Traditionally, laboratory wound healing studies include a collection of images at uniform time intervals exhibiting the wounded area and the healing process in the test animal, often a mouse. These images are then manually observed to determine key metrics relevant to the study, such as wound size progression. However, this task is a time-consuming and laborious process. In addition, defining the wound edge can be subjective and can vary from one individual to another, even among experts. Furthermore, as our understanding of the healing process grows, so does our need to efficiently and accurately track these key factors at high throughput (e.g., over large-scale and long-term experiments). Thus, in this study, we develop a deep learning-based image analysis pipeline that takes in non-uniform wound images and extracts relevant information such as the location of interest, wound-only image crops, and wound periphery size over time. In particular, our work focuses on images of wounded laboratory mice that are widely used for translationally relevant wound studies, and it leverages a commonly used ring-shaped splint present in most images to predict wound size. We apply the method to a dataset that was never meant to be quantified and thus presents many visual challenges. Additionally, the data set was not intended for training deep learning models and so is relatively small, with only 256 images. We compare results to expert measurements and demonstrate preservation of information relevant to predicting wound closure despite variability from machine to expert and even expert to expert. The proposed system produced high-fidelity results on unseen data with minimal human intervention. Furthermore, the pipeline estimates acceptable wound sizes even when less than 50% of the images are missing reference objects.
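A sketch of the splint-based calibration the pipeline leverages, assuming binary masks and a known splint diameter (the 10 mm value is a placeholder, not the study's splint size):

```python
import numpy as np

def wound_area_mm2(wound_mask: np.ndarray, splint_mask: np.ndarray,
                   splint_diameter_mm: float = 10.0) -> float:
    cols = np.where(splint_mask.any(axis=0))[0]
    splint_diameter_px = cols.max() - cols.min() + 1    # outer ring extent
    mm_per_px = splint_diameter_mm / splint_diameter_px
    return float(wound_mask.sum()) * mm_per_px ** 2     # pixel count -> mm^2
```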
Affiliation(s)
- Héctor Carrión: Department of Computer Science and Engineering, University of California, Santa Cruz, California, United States of America
- Mohammad Jafari: Department of Earth and Space Sciences, Columbus State University, Columbus, Georgia, United States of America
- Michelle Dawn Bagood: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Hsin-ya Yang: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Roslyn Rivkah Isseroff: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Marcella Gomez: Department of Applied Mathematics, University of California, Santa Cruz, California, United States of America
32
Maity A, Nair TR, Mehta S, Prakasam P. Automatic lung parenchyma segmentation using a deep convolutional neural network from chest X-rays. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103398] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
33
Wang H, Gu H, Qin P, Wang J. U-shaped GAN for Semi-Supervised Learning and Unsupervised Domain Adaptation in High Resolution Chest Radiograph Segmentation. Front Med (Lausanne) 2022; 8:782664. [PMID: 35096877 PMCID: PMC8792862 DOI: 10.3389/fmed.2021.782664] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 12/14/2021] [Indexed: 01/03/2023] Open
Abstract
Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes across datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for limited-annotation chest radiograph datasets. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled into a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net, which predicts a label for each pixel. The proposed U-shaped net is designed for high-resolution radiographs (1,024 × 1,024) for effective segmentation while taking the computational burden into account. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, which decreases the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as an encoder to reduce the computational burden of training the encoder from scratch. A semi-supervised learning approach is proposed that learns from limited annotated data while exploiting additional unannotated data with a pixel-level loss. U-shaped GAN is extended to UDA by taking the source and target domain data as the annotated and unannotated data of the semi-supervised learning approach, respectively. Compared with previous models that deal with the aforementioned problems separately, U-shaped GAN is compatible with the varying data distributions of multiple medical centers, with efficient training and optimized performance. U-shaped GAN can be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets and show that it significantly outperforms state-of-the-art models.
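The pointwise convolution named above is simply a 1 × 1 convolution; a tiny PyTorch sketch of the channel reduction (channel counts and spatial size are illustrative):

```python
import torch
import torch.nn as nn

# A 1x1 ("pointwise") convolution mixes channels at each pixel without touching
# spatial resolution: here 512 feature maps are reduced to 128.
pointwise = nn.Conv2d(in_channels=512, out_channels=128, kernel_size=1)

x = torch.randn(1, 512, 64, 64)
y = pointwise(x)
assert y.shape == (1, 128, 64, 64)   # same spatial size, 4x fewer channels
```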
Affiliation(s)
- Hongyu Wang: Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu: Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin: Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang: Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
34
Sarti M, Parlani M, Diaz-Gomez L, Mikos AG, Cerveri P, Casarin S, Dondossola E. Deep Learning for Automated Analysis of Cellular and Extracellular Components of the Foreign Body Response in Multiphoton Microscopy Images. Front Bioeng Biotechnol 2022; 9:797555. [PMID: 35145962 PMCID: PMC8822221 DOI: 10.3389/fbioe.2021.797555] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 12/28/2021] [Indexed: 12/02/2022] Open
Abstract
The foreign body response (FBR) is a major unresolved challenge that compromises medical implant integration and function through inflammation and fibrotic encapsulation. Mice implanted with polymeric scaffolds, coupled with intravital non-linear multiphoton microscopy acquisition, enable multiparametric, longitudinal investigation of FBR evolution and of interference strategies. However, follow-up analyses based on visual localization and manual segmentation are extremely time-consuming, subject to human error, and do not allow automated parameter extraction. We developed an integrated computational pipeline based on an innovative and versatile variant of the U-Net neural network to segment and quantify the cellular and extracellular structures of interest; accuracy is maintained across different objectives. This software for automatically detecting the elements of the FBR shows promise for unraveling the complexity of this pathophysiological process.
Affiliation(s)
- Mattia Sarti: Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Maria Parlani: David H. Koch Center for Applied Research of Genitourinary Cancers and Genitourinary Medical Oncology Department, The University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Cell Biology, Radboud University Medical Center, Nijmegen, Netherlands
- Luis Diaz-Gomez: Rice University, Dept. of Bioengineering, Houston, TX, United States
- Antonios G. Mikos: Rice University, Dept. of Bioengineering, Houston, TX, United States
- Pietro Cerveri: Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Stefano Casarin: Center for Computational Surgery, Houston Methodist Research Institute, Houston, TX, United States; Department of Surgery, Houston Methodist Hospital, Houston, TX, United States; Houston Methodist Academic Institute, Houston, TX, United States
- Eleonora Dondossola: David H. Koch Center for Applied Research of Genitourinary Cancers and Genitourinary Medical Oncology Department, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
35
Modified U-NET Architecture for Segmentation of Skin Lesion. SENSORS 2022; 22:s22030867. [PMID: 35161613 PMCID: PMC8838042 DOI: 10.3390/s22030867] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 01/17/2022] [Accepted: 01/20/2022] [Indexed: 11/17/2022]
Abstract
Dermoscopy images can be classified more accurately if skin lesions or nodules are segmented first. Because of their fuzzy borders, irregular boundaries, and inter- and intra-class variances, nodule segmentation is a difficult task. Several algorithms have been developed for the segmentation of skin lesions from dermoscopic pictures, but their accuracy lags well behind the industry standard. In this paper, a modified U-Net architecture is proposed that modifies the feature map's dimensions for accurate and automatic segmentation of dermoscopic images. In addition, adding more kernels to the feature map allowed a more precise extraction of the nodule. We evaluated the effectiveness of the proposed model across several hyperparameters, such as the number of epochs, the batch size, and the type of optimizer, testing it with augmentation techniques implemented to increase the number of images available in the PH2 dataset. The best performance achieved by the proposed model used the Adam optimizer with a batch size of 8 and 75 epochs.
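A minimal runnable sketch of the reported best training configuration (Adam, batch size 8, 75 epochs); the one-layer stand-in model, the random tensors, and the Dice loss are placeholders for the authors' modified U-Net and the PH2 data:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def dice_loss(pred, target, eps: float = 1e-6):
    pred = torch.sigmoid(pred)                 # logits -> probabilities
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# One-layer stand-in for the modified U-Net; any segmentation net fits the loop.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
data = TensorDataset(torch.randn(16, 3, 64, 64),
                     torch.randint(0, 2, (16, 1, 64, 64)).float())
loader = DataLoader(data, batch_size=8, shuffle=True)  # batch size 8, as reported
optimizer = torch.optim.Adam(model.parameters())       # Adam, as reported

for epoch in range(75):                                # 75 epochs, as reported
    for images, masks in loader:
        optimizer.zero_grad()
        loss = dice_loss(model(images), masks)
        loss.backward()
        optimizer.step()
```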
36
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. THE VISUAL COMPUTER 2022; 39:875-913. [PMID: 35035008 PMCID: PMC8741572 DOI: 10.1007/s00371-021-02352-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/01/2021] [Indexed: 06/14/2023]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but sometimes even experienced radiologists misinterpret the findings. This leads to the need for computer-aided detection and diagnosis. For decades, researchers detected pulmonary disorders automatically using traditional computer vision (CV) methods. Now, the availability of large annotated datasets and computing hardware has made it possible for deep learning to dominate the area; it is the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research that uses chest X-rays for lung segmentation and for the detection/classification of pulmonary disorders on publicly available datasets. Studies using Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GAN has gained the interest of the CV community for its ability to mitigate medical data scarcity. We have also included research conducted before the popularity of deep learning models in order to give a clear picture of the field. Many surveys have been published, but none is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
37
Arora R, Saini I, Sood N. Multi-label segmentation and detection of COVID-19 abnormalities from chest radiographs using deep learning. OPTIK 2021; 246:167780. [PMID: 34393275 PMCID: PMC8349421 DOI: 10.1016/j.ijleo.2021.167780] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 07/09/2021] [Accepted: 08/03/2021] [Indexed: 06/01/2023]
Abstract
Due to COVID-19, demand for chest radiographs (CXRs) has increased exponentially. We therefore present a novel, fully automatic, modified Attention U-Net (CXAU-Net) multi-class segmentation deep model that can detect common findings of COVID-19 in CXR images. The architectural design of this model includes three novelties: first, an Attention U-Net model with channel and spatial attention blocks is designed to precisely localize multiple pathologies; second, dilated convolutions improve the model's sensitivity to foreground pixels by enlarging the receptive field; and third, a newly proposed hybrid loss function combines both area and size information for optimizing the model. The proposed model achieves average accuracy, DSC, and Jaccard index scores of 0.951, 0.993, and 0.984 (image-based approach) and 0.921, 0.985, and 0.973 (patch-based approach) for multi-class segmentation on the Chest X-ray 14 dataset. Average DSC and Jaccard index scores of 0.998 and 0.989 are also achieved for binary-class segmentation on the Japanese Society of Radiological Technology (JSRT) CXR dataset. These results show that the proposed model outperforms state-of-the-art segmentation methods.
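For the dilated-convolution novelty, a small PyTorch illustration of how dilation widens the receptive field at unchanged resolution (channel counts are illustrative, not the CXAU-Net configuration):

```python
import torch
import torch.nn as nn

dense = nn.Conv2d(1, 16, kernel_size=3, padding=1)                # 3x3 field
dilated = nn.Conv2d(1, 16, kernel_size=3, padding=2, dilation=2)  # taps span 5x5

x = torch.randn(1, 1, 256, 256)
assert dense(x).shape == dilated(x).shape   # same output size, wider context
```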
Affiliation(s)
- Ruchika Arora: Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
- Indu Saini: Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
- Neetu Sood: Department of Electronics and Communication Engineering, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar 144011, India
38
Wang B, Takeda T, Sugimoto K, Zhang J, Wada S, Konishi S, Manabe S, Okada K, Matsumura Y. Automatic creation of annotations for chest radiographs based on the positional information extracted from radiographic image reports. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 209:106331. [PMID: 34418813 DOI: 10.1016/j.cmpb.2021.106331] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 07/28/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE In this study, we aimed to create a machine-learning method that detects disease lesions in chest X-ray (CXR) images using a data set annotated with information extracted from CXR reports. We set the nodule as the target lesion. Manually annotating nodules is costly in terms of time, so we used the report information to automatically produce training data for the object detection task. METHODS First, we use the semantic segmentation model PSP-Net to recognize the lung fields described in the CXR reports. Next, a classification model, ResNeSt-50, is used to discriminate nodules in the segmented right and left lung fields; it can also provide an attention map via Grad-CAM. If the attention region corresponds to the location of the nodule in the CXR report, an attention bounding box is generated. Finally, the object detection model Faster R-CNN was trained using the generated attention bounding boxes, and the bounding boxes predicted by Faster R-CNN were filtered to satisfy the location extracted from the CXR reports. RESULTS For lung field segmentation, a mean intersection over union of 0.889 was achieved by our best model. 15,156 chest radiographs were used for classification. The area under the receiver operating characteristic curve was 0.843 and 0.852 for the left and right lung, respectively. The detection precision of the generated attention bounding boxes was 0.341 to 0.531, depending on the binarization setting for the attention map. Through the object detection process, the detection precision of the bounding boxes improved to 0.567 to 0.800. CONCLUSION We successfully generated bounding boxes with nodules on CXR images based on the positional information of the diseases extracted from the CXR reports. Our method has the potential to provide bounding boxes for various lung lesions, which can reduce the annotation burden for specialists. SHORT ABSTRACT Machine learning for computer-aided image diagnosis requires annotation of images, but manual annotation is time-consuming for medical doctors. In this study, we created a machine-learning method that generates bounding boxes with disease lesions on chest X-ray (CXR) images using the positional information extracted from CXR reports. We set the nodule as the target lesion. First, we use PSP-Net to segment the lung field according to the CXR reports. Next, the classification model ResNeSt-50 is used to discriminate nodules in the segmented lung field. We also created an attention map using the Grad-CAM algorithm. If the area of attention matched the area annotated in the CXR report, the coordinates of the bounding box were considered a possible nodule area. Finally, we used the attention information obtained from the nodule classification model and trained the object detection model on all of the generated bounding boxes. Through the object detection model, the precision of the bounding boxes for detecting nodules improved.
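A sketch of the attention-to-bounding-box step, assuming a Grad-CAM map normalized to [0, 1] and a report-derived lung side; the threshold and function names are illustrative:

```python
import numpy as np
from scipy.ndimage import label

def attention_bbox(cam: np.ndarray, threshold: float = 0.6):
    """cam: HxW Grad-CAM map in [0, 1]; returns (top, left, bottom, right) of the
    largest high-attention blob, or None if nothing passes the threshold."""
    blobs, n = label(cam >= threshold)
    if n == 0:
        return None
    largest = 1 + np.argmax([(blobs == i).sum() for i in range(1, n + 1)])
    rows, cols = np.where(blobs == largest)
    return rows.min(), cols.min(), rows.max(), cols.max()

def matches_report_side(bbox, image_width: int, reported_side: str) -> bool:
    """Keep a box only if its centre lies in the lung named by the report.
    On a frontal radiograph the patient's right lung appears on the image left."""
    center_col = (bbox[1] + bbox[3]) / 2
    side = "right" if center_col < image_width / 2 else "left"
    return side == reported_side
```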
Affiliation(s)
- Bowen Wang: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Toshihiro Takeda: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Kento Sugimoto: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Jiahao Zhang: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Shoya Wada: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Shozo Konishi: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Shirou Manabe: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Katsuki Okada: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
- Yasushi Matsumura: Department of Medical Informatics, Osaka University Graduate School of Medicine, Japan
39
Gómez Ó, Mesejo P, Ibáñez Ó. Automatic segmentation of skeletal structures in X-ray images using deep learning for comparative radiography. FORENSIC IMAGING 2021. [DOI: 10.1016/j.fri.2021.200458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
40
Kim YJ, Lee SR, Choi JY, Kim KG. Using Convolutional Neural Network with Taguchi Parametric Optimization for Knee Segmentation from X-Ray Images. BIOMED RESEARCH INTERNATIONAL 2021; 2021:5521009. [PMID: 34476259 PMCID: PMC8408001 DOI: 10.1155/2021/5521009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2021] [Revised: 05/15/2021] [Accepted: 08/09/2021] [Indexed: 11/17/2022]
Abstract
Loss of knee cartilage can cause intense pain at the knee epiphysis and is one of the most common conditions worldwide. To diagnose this condition, the distance between the femur and tibia is calculated from X-ray images, and accurate segmentation of the femur and tibia is required to assist this calculation. Several studies have investigated automatic knee segmentation to assist in the calculation process, but the results are of limited value owing to the complexity of the knee. To address this problem, this study exploits deep learning for robust segmentation that is not affected by the imaging environment. In addition, the Taguchi method is applied to optimize the deep learning results. The deep learning architecture, the optimizer, and the learning rate are considered as factors in the Taguchi table to examine their impact on the results and their interactions. When the Dilated-ResNet architecture is used with the Adam optimizer and a learning rate of 0.001, Dice coefficients of 0.964 and 0.942 are obtained for the femur and tibia, respectively, in knee segmentation. The implemented procedure and the results of this investigation may be beneficial in determining the correct margins of the femur and tibia and can be the basis for developing an automatic diagnosis algorithm for orthopedic diseases.
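For context, a sketch of a Taguchi-style L9 design over the three reported factors; the level names and values here are assumptions for illustration, not the study's exact table:

```python
# Standard L9(3^3) orthogonal rows: across the nine runs, every pair of factors
# sees all nine level combinations exactly once, instead of 27 full-grid runs.
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

architectures = ["u_net", "res_net", "dilated_resnet"]   # assumed level names
optimizers = ["sgd", "rmsprop", "adam"]
learning_rates = [0.01, 0.001, 0.0001]

for a, o, lr in L9:
    config = (architectures[a], optimizers[o], learning_rates[lr])
    print("train and record Dice for:", config)  # scores feed the Taguchi analysis
```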
Affiliation(s)
- Young Jae Kim: Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
- Seung Ro Lee: Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
- Ja-Young Choi: Department of Radiology, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Kwang Gi Kim: Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
41
Shinohara H, Kodera S, Ninomiya K, Nakamoto M, Katsushika S, Saito A, Minatsuki S, Kikuchi H, Kiyosue A, Higashikuni Y, Takeda N, Fujiu K, Ando J, Akazawa H, Morita H, Komuro I. Automatic detection of vessel structure by deep learning using intravascular ultrasound images of the coronary arteries. PLoS One 2021; 16:e0255577. [PMID: 34351974 PMCID: PMC8341597 DOI: 10.1371/journal.pone.0255577] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 07/19/2021] [Indexed: 11/18/2022] Open
Abstract
Intravascular ultrasound (IVUS) is a diagnostic modality used during percutaneous coronary intervention. However, specialist skills are required to interpret IVUS images. To address this issue, we developed a new artificial intelligence (AI) program that categorizes vessel components, including calcification and stents, seen in IVUS images of complex lesions. To develop our AI with U-Net, IVUS images were taken from patients with angina pectoris and manually segmented into the following categories: lumen area, medial plus plaque area, calcification, and stent. To evaluate the AI's performance, we calculated the classification accuracy of vessel components in IVUS images of vessels with clinically significantly narrowed lumina (< 4 mm2) and of those with severe calcification. Additionally, we assessed the correlation between lumen areas in manually labeled ground-truth images and those in AI-predicted images, the mean intersection over union (IoU) on a test set, and the recall score for detecting stent struts in each test-set IVUS image in which a stent was present. Among 3,738 labeled images, 323 were randomly selected as a test set; the remaining 3,415 images were used for training. The classification accuracies for vessels with significantly narrowed lumina and those with severe calcification were 0.97 and 0.98, respectively. Additionally, there was a significant correlation in lumen area between the ground-truth and predicted images (ρ = 0.97, R2 = 0.97, p < 0.001). However, the mean IoU of the test set was 0.66 and the recall score for detecting stent struts was 0.64. Our AI program accurately classified vessels requiring treatment and vessel components, except for stents, in IVUS images of complex lesions. AI may be a powerful tool for assisting in the interpretation of IVUS imaging and could promote the popularization of IVUS-guided percutaneous coronary intervention in clinical settings.
Affiliation(s)
- Hiroki Shinohara: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Satoshi Kodera: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Kota Ninomiya: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Mitsuhiko Nakamoto: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Susumu Katsushika: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Akihito Saito: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Shun Minatsuki: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Hironobu Kikuchi: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Arihiro Kiyosue: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Yasutomi Higashikuni: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Norifumi Takeda: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Katsuhito Fujiu: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan; Department of Advanced Cardiology, The University of Tokyo, Tokyo, Japan
- Jiro Ando: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Hiroshi Akazawa: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Hiroyuki Morita: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Issei Komuro: Department of Cardiovascular Medicine, The University of Tokyo Hospital, Tokyo, Japan
42
Hirashima H, Nakamura M, Baillehache P, Fujimoto Y, Nakagawa S, Saruya Y, Kabasawa T, Mizowaki T. Development of in-house fully residual deep convolutional neural network-based segmentation software for the male pelvic CT. Radiat Oncol 2021; 16:135. [PMID: 34294090 PMCID: PMC8299691 DOI: 10.1186/s13014-021-01867-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 07/19/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND This study aimed to (1) develop fully residual deep convolutional neural network (CNN)-based segmentation software for computed tomography image segmentation of the male pelvic region and (2) demonstrate its efficiency in the male pelvic region. METHODS A total of 470 prostate cancer patients who had undergone intensity-modulated radiotherapy or volumetric-modulated arc therapy were enrolled. Our model was based on FusionNet, a fully residual deep CNN developed to semantically segment biological images. To develop the CNN-based segmentation software, 450 patients were randomly selected and separated into training, validation, and testing groups (270, 90, and 90 patients, respectively). In Experiment 1, to determine the optimal model, we first assessed segmentation accuracy according to the size of the training dataset (90, 180, and 270 patients). In Experiment 2, the effect of varying the number of training labels on segmentation accuracy was evaluated. After determining the optimal model, in Experiment 3 the developed software was applied to the remaining 20 datasets to assess segmentation accuracy. The volumetric Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (95%HD) were calculated to evaluate the segmentation accuracy of each organ in Experiment 3. RESULTS In Experiment 1, the median DSCs for the prostate were 0.61 for dataset 1 (90 patients) and 0.86 for both dataset 2 (180 patients) and dataset 3 (270 patients). The median DSCs for all organs increased significantly when the number of training cases increased from 90 to 180 but did not improve upon a further increase from 180 to 270. In Experiment 2, the number of labels applied during training had little effect on the DSCs. The optimal model was built with 270 patients and four organs. In Experiment 3, the median DSC and 95%HD values were 0.82 and 3.23 mm for the prostate; 0.71 and 3.82 mm for the seminal vesicles; 0.89 and 2.65 mm for the rectum; and 0.95 and 4.18 mm for the bladder. CONCLUSIONS We have developed CNN-based segmentation software for the male pelvic region and demonstrated that it is efficient for this anatomy.
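A sketch of the two reported metrics, volumetric DSC and the 95th-percentile Hausdorff distance, assuming isotropic voxel spacing and surface points as coordinate arrays (anisotropic spacing would need per-axis scaling):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a: np.ndarray, pts_b: np.ndarray, spacing_mm: float = 1.0) -> float:
    """pts_a, pts_b: (N, 3) voxel coordinates sampled from the two surfaces."""
    d = cdist(pts_a, pts_b) * spacing_mm      # all pairwise Euclidean distances
    return max(np.percentile(d.min(axis=1), 95),   # directed distance a -> b
               np.percentile(d.min(axis=0), 95))   # directed distance b -> a
```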
Affiliation(s)
- Hideaki Hirashima: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Mitsuhiro Nakamura: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan; Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Pascal Baillehache: Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Yusuke Fujimoto: Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Shota Nakagawa: Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Yusuke Saruya: Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Tatsumasa Kabasawa: Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Takashi Mizowaki: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
43
Raj R, Londhe ND, Sonawane R. Automated psoriasis lesion segmentation from unconstrained environment using residual U-Net with transfer learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106123. [PMID: 33975181 DOI: 10.1016/j.cmpb.2021.106123] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 04/18/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE The automatic segmentation of psoriasis lesions from digital images is a challenging task due to the unconstrained imaging environment and non-uniform background. Existing conventional or machine learning-based image processing methods for automatic psoriasis lesion segmentation have several limitations, such as dependency on handcrafted features, human intervention, performance that degrades and becomes unreliable as data volume increases, and manual pre-processing steps for the removal of background or other artifacts. METHODS In this paper, we propose a fully automatic approach based on a deep learning model using the transfer learning paradigm for the segmentation of psoriasis lesions from digital images of different body regions of psoriasis patients. The proposed model is based on the U-Net architecture, whose encoder path utilizes a pre-trained residual network as a backbone. The proposed model is retrained with a self-prepared psoriasis dataset and the corresponding segmentation annotations of the lesions. RESULTS The performance of the proposed method is evaluated using five-fold cross-validation. The proposed method achieves an average Dice Similarity Index of 0.948 and a Jaccard Index of 0.901 for the intended task. Transfer learning improves segmentation performance by about 4.4% in the Dice Similarity Index and 7.6% in the Jaccard Index, compared with training the proposed model from scratch. CONCLUSIONS An extensive comparative analysis with state-of-the-art segmentation models and the existing literature validates the promising performance of the proposed framework. Hence, our proposed method will provide a basis for objective area assessment of psoriasis lesions.
Affiliation(s)
- Ritesh Raj: Electrical Engineering Department, National Institute of Technology Raipur, Raipur, Chhattisgarh, 492010, India
- Narendra D Londhe: Electrical Engineering Department, National Institute of Technology Raipur, Raipur, Chhattisgarh, 492010, India
- Rajendra Sonawane: Psoriasis Clinic and Research Centre, Psoriatreat, Pune, Maharashtra, 411004, India
44
Singh A, Lall B, Panigrahi B, Agrawal A, Agrawal A, Thangakunam B, Christopher D. Deep LF-Net: Semantic lung segmentation from Indian chest radiographs including severely unhealthy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102666] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
45
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Collapse
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands.
| | - Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| |
Collapse
|
46
|
Tolkachev A, Sirazitdinov I, Kholiavchenko M, Mustafaev T, Ibragimov B. Deep Learning for Diagnosis and Segmentation of Pneumothorax: The Results on the Kaggle Competition and Validation Against Radiologists. IEEE J Biomed Health Inform 2021; 25:1660-1672. [PMID: 32956067 DOI: 10.1109/jbhi.2020.3023476] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Pneumothorax is a potentially life-threatening condition that requires urgent diagnosis and treatment. The chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. Computer-aided diagnosis of pneumothorax has received a dramatic boost in the last few years due to deep learning advances and the first public pneumothorax diagnosis competition, with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper describes one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the U-Net convolutional neural network with various backbones, namely ResNet34, SE-ResNeXt50, SE-ResNeXt101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and different pre- and post-processing steps. The framework achieved a Dice coefficient of 0.8574. The second contribution of the paper is the comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation in challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of the diagnosis and observed that the deep learning framework and radiologists find the same X-rays easy or difficult to analyze (p-value < 1e-4). Finally, the methodology of all top-performing teams from the competition leaderboard was analyzed to identify the consistent methodological patterns of accurate pneumothorax detection and segmentation.
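The backbone-swapping idea described above maps naturally onto off-the-shelf tooling; below is a minimal sketch using the third-party segmentation_models_pytorch library (an assumption for illustration, not the competition framework's actual code; the encoder names follow that library's conventions).

```python
import torch
import segmentation_models_pytorch as smp

# The four backbones named in the abstract, as encoder names in smp.
backbones = ["resnet34", "se_resnext50_32x4d", "se_resnext101_32x4d", "densenet121"]

models = {
    name: smp.Unet(
        encoder_name=name,           # backbone used as the U-Net encoder
        encoder_weights="imagenet",  # ImageNet pre-training, a common choice
        in_channels=1,               # single-channel chest X-rays
        classes=1,                   # binary pneumothorax mask
    )
    for name in backbones
}

x = torch.randn(2, 1, 512, 512)        # batch of 2 grayscale X-rays
logits = models["resnet34"](x)         # -> (2, 1, 512, 512) mask logits
mask = (torch.sigmoid(logits) > 0.5).float()
```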
Collapse
|
47
|
Howell RS, Liu HH, Khan AA, Woods JS, Lin LJ, Saxena M, Saxena H, Castellano M, Petrone P, Slone E, Chiu ES, Gillette BM, Gorenstein SA. Development of a Method for Clinical Evaluation of Artificial Intelligence-Based Digital Wound Assessment Tools. JAMA Netw Open 2021; 4:e217234. [PMID: 34009348 PMCID: PMC8134996 DOI: 10.1001/jamanetworkopen.2021.7234] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
IMPORTANCE Accurate assessment of wound area and percentage of granulation tissue (PGT) is important for optimizing wound care and healing outcomes. Artificial intelligence (AI)-based wound assessment tools have the potential to improve the accuracy and consistency of wound area and PGT measurement while improving the efficiency of wound care workflows. OBJECTIVE To develop a quantitative and qualitative method to evaluate AI-based wound assessment tools compared with expert human assessments. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study was performed across 2 independent wound centers using deidentified wound photographs collected for routine care (site 1, 110 photographs taken between May 1 and 31, 2018; site 2, 89 photographs taken between January 1 and December 31, 2019). Digital wound photographs of patients were selected chronologically from the electronic medical records of the general population of patients visiting the wound centers. For inclusion in the study, the complete wound edge and a ruler were required to be visible; circumferential ulcers were specifically excluded. Four wound specialists (2 per site) and an AI-based wound assessment service independently traced wound area and granulation tissue. MAIN OUTCOMES AND MEASURES The quantitative performance of AI tracings was evaluated by statistically comparing error measure distributions between test AI traces and reference human traces (AI vs human) with error distributions between independent traces by 2 humans (human vs human). Quantitative outcomes included statistically significant differences in error measures of false-negative area (FNA), false-positive area (FPA), and absolute relative error (ARE) between AI vs human and human vs human comparisons of wound area and granulation tissue tracings. Six masked attending physician reviewers (3 per site) viewed randomized area tracings for AI and human annotators and qualitatively assessed them. Qualitative outcomes included a statistically significant difference in the absolute difference between AI-based PGT measurements and mean reviewer visual PGT estimates compared with PGT estimate variability measures (ie, range, standard deviation) across reviewers. RESULTS A total of 199 photographs were selected for the study across both sites; mean (SD) patient age was 64 (18) years (range, 17-95 years), and 127 (63.8%) were women. The comparisons of AI vs human with human vs human for FPA and ARE were not statistically significant. AI vs human FNA was slightly elevated compared with human vs human FNA (median [IQR], 7.7% [2.7%-21.2%] vs 5.7% [1.6%-14.9%]; P < .001), indicating that AI traces tended to slightly underestimate the human reference wound boundaries compared with human test traces. Two of 6 reviewers agreed with statistically higher frequency that human tracings met the standard area definition, but overall agreement was moderate (352 of 583 responses [60.4%] for AI and 793 of 1166 responses [68.0%] for human tracings). AI PGT measurements fell within the typical range of variation in interreviewer visual PGT estimates; however, visual PGT estimates varied considerably (mean range, 34.8%; mean SD, 19.6%). CONCLUSIONS AND RELEVANCE This study provides a framework for evaluating AI-based digital wound assessment tools that can be extended to automated measurements of other wound features or adapted to evaluate other AI-based digital image diagnostic tools. As AI-based wound assessment tools become more common across wound care settings, it will be important to rigorously validate their performance in helping clinicians obtain accurate wound assessments to guide clinical care.
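The three error measures named in the outcomes (FNA, FPA, ARE) are area comparisons between a test trace and a reference trace; the sketch below shows one common way to compute them from binary masks. The normalization by reference area and the function name are assumptions for illustration, not the study's exact definitions.

```python
import numpy as np

def trace_errors(test: np.ndarray, ref: np.ndarray):
    """Error measures between a test trace and a reference trace (binary masks)."""
    test, ref = test.astype(bool), ref.astype(bool)
    ref_area = ref.sum()                                  # assumes a non-empty reference trace
    fna = np.logical_and(ref, ~test).sum() / ref_area     # reference area the test trace missed
    fpa = np.logical_and(test, ~ref).sum() / ref_area     # test area outside the reference
    are = abs(test.sum() - ref_area) / ref_area           # absolute relative error in total area
    return fna, fpa, are
```

Under this convention, a test trace that exactly matches the reference yields (0, 0, 0), while a trace covering only half the reference wound yields FNA = 0.5.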
Collapse
Affiliation(s)
- Raelina S. Howell
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
| | - Helen H. Liu
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
| | - Aziz A. Khan
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
| | - Jon S. Woods
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
| | - Lawrence J. Lin
- NYU Kimmel Hyperbaric and Advanced Wound Healing Center, New York, New York
| | | | | | - Michael Castellano
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
- Department of Surgery, NYU Long Island School of Medicine, Mineola, New York
| | - Patrizio Petrone
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
- Department of Surgery, NYU Long Island School of Medicine, Mineola, New York
| | - Eric Slone
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
| | - Ernest S. Chiu
- NYU Kimmel Hyperbaric and Advanced Wound Healing Center, New York, New York
| | - Brian M. Gillette
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
- Department of Foundations of Medicine, NYU Long Island School of Medicine, Mineola, New York
| | - Scott A. Gorenstein
- Department of Surgery, NYU Langone Hospital Long Island, Mineola, New York
- Department of Surgery, NYU Long Island School of Medicine, Mineola, New York
| |
Collapse
|
48
|
Wang H, Minnema J, Batenburg KJ, Forouzanfar T, Hu FJ, Wu G. Multiclass CBCT Image Segmentation for Orthodontics with Deep Learning. J Dent Res 2021; 100:943-949. [PMID: 33783247 PMCID: PMC8293763 DOI: 10.1177/00220345211005338] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
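For a multiclass mask like the one described (background, jaw, teeth), the per-class Dice similarity coefficients reported above can be computed by evaluating each label against the gold standard separately; a minimal NumPy sketch follows (label values and function name are illustrative assumptions, not the authors' code).

```python
import numpy as np

def multiclass_dice(pred: np.ndarray, truth: np.ndarray, labels=(1, 2)):
    """Per-class Dice for integer label maps, e.g. 0 = background, 1 = jaw, 2 = teeth."""
    scores = {}
    for label in labels:
        p, t = pred == label, truth == label
        denom = p.sum() + t.sum()
        # Convention: empty class in both masks counts as a perfect match.
        scores[label] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores  # e.g. {1: dice_jaw, 2: dice_teeth}
```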
Collapse
Affiliation(s)
- H Wang
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - J Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - K J Batenburg
- Centrum Wiskunde and Informatica, Amsterdam, the Netherlands
| | - T Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| | - F J Hu
- Institute of Information Technology, Zhejiang Shuren University, Hangzhou, China
| | - G Wu
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, Amsterdam UMC, Academic Centre for Dentistry Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands.,Department of Oral Implantology and Prosthetic Dentistry, Academic Centre for Dentistry Amsterdam, University of Amsterdam and Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
49
|
Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Peri-Implant Bone Loss Measurement Using a Region-Based Convolutional Neural Network on Dental Periapical Radiographs. J Clin Med 2021; 10:1009. [PMID: 33801384 PMCID: PMC7958615 DOI: 10.3390/jcm10051009] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 01/06/2023] Open
Abstract
Determining the peri-implant marginal bone level on radiographs is challenging because the boundaries of the bones around implants are often unclear or the heights of the buccal and lingual bone levels differ. Therefore, a deep convolutional neural network (CNN) was evaluated for detecting the marginal bone level, top, and apex of implants on dental periapical radiographs. An automated assistant system was proposed for calculating the bone loss percentage and classifying the bone resorption severity. A modified region-based CNN (R-CNN) was trained using transfer learning based on the Microsoft Common Objects in Context dataset. Overall, 708 periapical radiographic images were divided into training (n = 508), validation (n = 100), and test (n = 100) datasets. The training dataset was randomly enriched by data augmentation. For evaluation, average precision, average recall, and mean object keypoint similarity (OKS) were calculated, and the mean OKS values of the model and a dental clinician were compared. Using the detected keypoints, radiographic bone loss was measured and classified. No statistically significant difference was found between the modified R-CNN model and the dental clinician in detecting landmarks around dental implants. The modified R-CNN model can be utilized to measure the radiographic peri-implant bone loss ratio to assess the severity of peri-implantitis.
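Given the three detected keypoints per implant (top, apex, and marginal bone level), a bone loss percentage can be derived geometrically; the sketch below illustrates one plausible formulation. The ratio definition and the severity cut-offs are assumptions for illustration and are not taken from the paper.

```python
import math

def bone_loss_percent(top, apex, bone_level):
    """Each argument is an (x, y) pixel coordinate on the radiograph."""
    implant_len = math.dist(top, apex)          # full implant length
    exposed_len = math.dist(top, bone_level)    # implant length above the marginal bone
    return 100.0 * exposed_len / implant_len

def severity(loss_pct):
    # Illustrative cut-offs only; the study's thresholds are not reproduced here.
    if loss_pct < 25:
        return "mild"
    if loss_pct < 50:
        return "moderate"
    return "severe"

# 70 px of a 200 px implant exposed -> 35% bone loss -> "moderate"
print(severity(bone_loss_percent((100, 40), (100, 240), (100, 110))))
```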
Collapse
Affiliation(s)
- Jun-Young Cha
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea; (J.-Y.C.); (H.-I.Y.); (I.-S.Y.)
| | - Hyung-In Yoon
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea; (J.-Y.C.); (H.-I.Y.); (I.-S.Y.)
| | - In-Sung Yeo
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea; (J.-Y.C.); (H.-I.Y.); (I.-S.Y.)
| | - Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
| | - Jung-Suk Han
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea; (J.-Y.C.); (H.-I.Y.); (I.-S.Y.)
| |
Collapse
|
50
|
Abstract
As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep learning-based medical image segmentation are introduced; its current research status is reviewed, the three main categories of medical image segmentation methods and their respective limitations are summarized, and future development directions are outlined. For different pathological tissues and organs, their specific characteristics and the classic segmentation algorithms applied to each are then summarized. Despite the great achievements of medical image segmentation in recent years, deep learning-based medical image segmentation still faces research difficulties: segmentation accuracy is often limited, datasets contain relatively few medical images, and image resolution is low, so inaccurate segmentation results cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep learning-based medical image segmentation methods is provided to help researchers address them.
Collapse
|