1. Wan M, Zhu J, Che Y, Cao X, Han X, Si X, Wang W, Shu C, Luo M, Zhang X. BIF-Net: Boundary information fusion network for abdominal aortic aneurysm segmentation. Comput Biol Med 2024; 183:109191. [PMID: 39393127] [DOI: 10.1016/j.compbiomed.2024.109191]
Abstract
Accurate abdominal aortic aneurysm (AAA) segmentation is important for assisting clinicians in diagnosis and treatment planning. However, existing segmentation methods make poor use of the semantic information of vessel boundaries, which is disadvantageous for segmenting AAAs with large variability in vessel diameter (ranging from 4 mm to 85 mm). To tackle this problem, we introduce a boundary information fusion network (BIF-Net) specially designed for AAA segmentation. BIF-Net first constructs convolutional kernels based on Gabor and Sobel operators, enriching global semantic features and localization information through the Gabor and Sobel dilated convolution (GSDC) module. Additionally, BIF-Net recovers boundary feature information lost during sampling through the guided filtering feature supplementation (GFFS) module and the channel-spatial attention module (CSAM), enhancing its ability to capture targets with diverse shapes and boundary features. Finally, we introduce a boundary feature loss function to alleviate the impact of the imbalance between positive and negative samples. The results demonstrate that BIF-Net outperforms current state-of-the-art methods across multiple evaluation metrics, achieving the highest Dice similarity coefficient (DSC) accuracies of 93.29% and 91.01% on the preoperative and postoperative datasets, respectively, improvements of 6.86% and 3.85% over the state of the art. Owing to its powerful boundary feature extraction, BIF-Net is a competitive AAA segmentation method with significant potential for application in the diagnosis and treatment of AAA.
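Several entries in this list report Dice similarity coefficient (DSC) values; as a reminder of what that headline metric computes, here is a minimal NumPy sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Two toy 4x4 masks: 3 foreground pixels each, 2 of them overlapping.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, 0:3] = 1
target[0, 1:4] = 1
dsc = dice_coefficient(pred, target)   # 2*2 / (3+3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap, 0.0 means none; the reported 93.29% corresponds to a DSC of 0.9329.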
Affiliation(s)
- Mingyu Wan, Jing Zhu, Yue Che, Xiran Cao, Xiao Han, Xinhui Si, Xuelan Zhang: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Wei Wang: Department of Radiology, Beijing Rehabilitation Hospital of Capital Medical University, Beijing, 100144, China
- Chang Shu: Department of Vascular Surgery, Fuwai Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 100037, China; Department of Vascular Surgery, Second Xiangya Hospital, Central South University, Changsha, 410011, China
- Mingyao Luo: Department of Vascular Surgery, Fuwai Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 100037, China; Department of Vascular Surgery, Fuwai Yunnan Cardiovascular Hospital, Affiliated Cardiovascular Hospital of Kunming Medical University, Kunming, 650102, China
2. Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085] [DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou: School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari: School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang: Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap: Department of Bioengineering, Imperial College London, London, UK
3. Bi L, Buehner U, Fu X, Williamson T, Choong P, Kim J. Hybrid CNN-transformer network for interactive learning of challenging musculoskeletal images. Comput Methods Programs Biomed 2024; 243:107875. [PMID: 37871450] [DOI: 10.1016/j.cmpb.2023.107875]
Abstract
BACKGROUND AND OBJECTIVES Segmentation of regions of interest (ROIs) such as tumors and bones plays an essential role in the analysis of musculoskeletal (MSK) images. Segmentation results can help orthopaedic surgeons with surgical outcome assessment and patient gait-cycle simulation. Deep learning-based automatic segmentation methods, particularly those using fully convolutional networks (FCNs), are considered the state of the art. However, when the training data are insufficient to account for all variations in the ROIs, these methods struggle with challenging ROIs that have less common image characteristics, such as low contrast with the background, inhomogeneous textures, and fuzzy boundaries. METHODS We propose a hybrid convolutional neural network-transformer network (HCTN) for semi-automatic segmentation to overcome these limitations on challenging MSK images. Specifically, we fuse user inputs (manual, e.g., mouse clicks) with high-level semantic image features derived from the neural network (automatic), where the user inputs drive interactive training for uncommon image characteristics. In addition, we leverage a transformer network (TN), a deep learning model designed for handling sequence data, together with features derived from FCNs; this addresses the limitation that FCNs operate on small kernels, which tend to dismiss global context and focus only on local patterns. RESULTS We purposely selected three MSK imaging datasets covering a variety of structures to evaluate the generalizability of the proposed method. Our semi-automatic HCTN method achieved Dice similarity coefficient (DSC) scores of 88.46 ± 9.41 for segmenting soft-tissue sarcoma tumors from magnetic resonance (MR) images, 73.32 ± 11.97 for segmenting osteosarcoma tumors from MR images, and 93.93 ± 1.84 for segmenting the clavicles from chest radiographs. Compared to the current state-of-the-art automatic segmentation method, HCTN is 11.7%, 19.11%, and 7.36% higher in DSC on the three datasets, respectively. CONCLUSION Our experimental results demonstrate that HCTN achieves more generalizable results than current methods, especially on challenging MSK studies.
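One common way to realize the user-input fusion described above is to encode mouse clicks as an extra Gaussian-heatmap channel concatenated with the image before it enters the network. The sketch below illustrates only that encoding; the shapes, sigma, and click positions are illustrative assumptions, not HCTN's actual design:

```python
import numpy as np

def click_heatmap(shape, clicks, sigma=2.0):
    """Encode user clicks (row, col) as a Gaussian heatmap in [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape, dtype=float)
    for cy, cx in clicks:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)   # keep the strongest response per pixel
    return heat

image = np.zeros((16, 16))                           # placeholder grayscale image
heat = click_heatmap(image.shape, [(4, 4), (10, 12)])
net_input = np.stack([image, heat])                  # 2-channel input: image + clicks
```

The network then sees the interaction map as an ordinary input channel, so clicked regions can steer the learned features.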
Affiliation(s)
- Lei Bi: Institute of Translational Medicine, National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science, University of Sydney, NSW, Australia
- Xiaohang Fu: School of Computer Science, University of Sydney, NSW, Australia
- Tom Williamson: Stryker Corporation, Kalamazoo, Michigan, USA; Centre for Additive Manufacturing, School of Engineering, RMIT University, VIC, Australia
- Peter Choong: Department of Surgery, University of Melbourne, VIC, Australia
- Jinman Kim: School of Computer Science, University of Sydney, NSW, Australia
4. Ullah I, Ali F, Shah B, El-Sappagh S, Abuhmed T, Park SH. A deep learning based dual encoder-decoder framework for anatomical structure segmentation in chest X-ray images. Sci Rep 2023; 13:791. [PMID: 36646735] [PMCID: PMC9842654] [DOI: 10.1038/s41598-023-27815-w]
Abstract
Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-rays. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder-decoder convolutional neural network (CNN). The first network in the dual encoder-decoder structure uses a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network's representational power, enabling dynamic channel-wise feature recalibration. The recalibrated features are passed into the first decoder to generate a mask. We integrate the generated mask with the input image and pass it through a second encoder-decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve segmentation of the smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that our proposed technique outperforms existing multi-class and single-class segmentation methods.
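The squeeze-and-excitation step mentioned above is compact: a global average pool ("squeeze") feeds two small fully connected layers whose sigmoid output rescales each channel ("excitation"). A minimal NumPy sketch with random weights and illustrative shapes, not the paper's trained network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) are the two FC layers (r = reduction ratio).
    """
    squeezed = x.mean(axis=(1, 2))                        # squeeze: shape (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # FC -> ReLU -> FC -> sigmoid
    return x * gate[:, None, None]                        # channel-wise recalibration

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))    # 8 channels, 4x4 spatial, r = 4
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = se_block(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, which is how it reweights feature importance.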
Affiliation(s)
- Ihsan Ullah, Sang Hyun Park: Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
- Farman Ali: Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, 03063, South Korea
- Babar Shah: College of Technological Innovation, Zayed University, Dubai, 19282, United Arab Emirates
- Shaker El-Sappagh: Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt; Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, 13518, Egypt
- Tamer Abuhmed: Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University, Suwon, 16419, South Korea
5. Yang L, Gu Y, Huo B, Liu Y, Bian G. A shape-guided deep residual network for automated CT lung segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108981]
6. Lung Field Segmentation in Chest X-ray Images Using Superpixel Resizing and Encoder–Decoder Segmentation Networks. Bioengineering (Basel) 2022; 9:351. [PMID: 36004876] [PMCID: PMC9404743] [DOI: 10.3390/bioengineering9080351]
Abstract
Lung segmentation of chest X-ray (CXR) images is a fundamental step in many diagnostic applications. Most lung field segmentation methods reduce the image size to speed up subsequent processing and then upsample the low-resolution result back to the original high resolution, but image boundaries become blurred by these downsampling and upsampling steps. In this paper, we incorporate lung field segmentation into a superpixel resizing framework to alleviate this blurring. The framework upsamples the segmentation results based on the superpixel boundary information obtained during downsampling. With this method, not only can the computation time of high-resolution medical image segmentation be reduced, but the quality of the segmentation results can also be preserved. We evaluate the proposed method on the JSRT, LIDC-IDRI, and ANH datasets. The experimental results show that the proposed superpixel resizing framework outperforms other traditional image resizing methods. Furthermore, combining the segmentation network with the superpixel resizing framework, the proposed method achieves better results with an average time of 4.6 s on CPU and 0.02 s on GPU.
7. Liu W, Luo J, Yang Y, Wang W, Deng J, Yu L. Automatic lung segmentation in chest X-ray images using improved U-Net. Sci Rep 2022; 12:8649. [PMID: 35606509] [PMCID: PMC9127108] [DOI: 10.1038/s41598-022-12743-y]
Abstract
The automatic segmentation of the lung region in chest X-rays (CXRs) can help doctors diagnose many lung diseases. However, extreme lung shape changes and fuzzy lung regions caused by serious lung disease can cause automatic lung segmentation models to fail. We improved the U-Net by using a pretrained EfficientNet-B4 as the encoder and residual blocks with the LeakyReLU activation function in the decoder. The network extracts lung field features efficiently and avoids the gradient instability caused by the multiplication effect in gradient backpropagation. Compared with the traditional U-Net, our method improves the Dice coefficient by about 2.5% and the Jaccard index by about 6% on two benchmark lung segmentation datasets, and by about 5% and 9%, respectively, on a private lung segmentation dataset. Comparative experiments show that our method improves the accuracy of lung segmentation in CXR images with a lower standard deviation and good robustness.
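The decoder design above pairs residual (identity-shortcut) blocks with LeakyReLU. The toy forward pass below shows just that pattern, using a 1x1 channel-mixing weight in place of a full convolution; it is a sketch of the idea, not the paper's network:

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def residual_block(x, w):
    """out = LeakyReLU(x + f(x)) for a (C, H, W) input.

    f is a 1x1 'convolution': w (C, C) mixes channels at each spatial location.
    """
    fx = np.einsum('oc,chw->ohw', w, x)
    return leaky_relu(x + fx)

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 5, 5))
w = np.zeros((3, 3))          # with f(x) = 0 the block reduces to LeakyReLU(x)
out = residual_block(x, w)
```

The shortcut path (the `x +` term) is what keeps gradients flowing even when `f` contributes little, and LeakyReLU's non-zero negative slope avoids dead units.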
Affiliation(s)
- Wufeng Liu, Jiaxin Luo, Yan Yang, Liang Yu: Henan University of Technology, Zhengzhou, 450001, China
- Wenlian Wang, Junkui Deng: Nanyang Central Hospital, Nanyang, 473009, China
8. Gómez Ó, Mesejo P, Ibáñez Ó, Cordón Ó. Deep architectures for the segmentation of frontal sinuses in X-ray images: Towards an automatic forensic identification system in comparative radiography. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.116]
9. Gómez Ó, Mesejo P, Ibáñez Ó. Automatic segmentation of skeletal structures in X-ray images using deep learning for comparative radiography. Forensic Imaging 2021. [DOI: 10.1016/j.fri.2021.200458]
10. Anatomic Point-Based Lung Region with Zone Identification for Radiologist Annotation and Machine Learning for Chest Radiographs. J Digit Imaging 2021; 34:922-931. [PMID: 34327625] [DOI: 10.1007/s10278-021-00494-7]
Abstract
Our objective is to investigate the reliability and usefulness of anatomic point-based lung zone segmentation on chest radiographs (CXRs) as a reference-standard framework and to evaluate the accuracy of automated point placement. Two hundred frontal CXRs were presented to two radiologists who identified five anatomic points: two at the lung apices, one at the top of the aortic arch, and two at the costophrenic angles. Of these 1000 anatomic points, 161 (16.1%) were obscured (mostly by pleural effusions). Observer variations were investigated. Eight anatomic zones were then automatically generated from the manually placed points, and a prototype algorithm was developed to use the point-based lung zone segmentation to detect cardiomegaly and the levels of the diaphragm and pleural effusions. A trained U-Net neural network was used to automatically place the five points within 379 CXRs from an independent database. Intra- and inter-observer variation in mean distance between corresponding anatomic points was larger for obscured points (8.7 mm and 20 mm, respectively) than for visible points (4.3 mm and 7.6 mm, respectively). The algorithm using the point-based lung zone segmentation could diagnostically measure the cardiothoracic ratio, diaphragm position, and pleural effusion. The mean distance between corresponding points placed by the radiologist and by the neural network was 6.2 mm. The network identified 95% of the radiologist-indicated points, with only 3% of network-identified points being false positives. In conclusion, a reliable anatomic point-based lung segmentation method for CXRs has been developed with expected utility for establishing reference standards for machine learning applications.
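The agreement figures above are mean Euclidean distances between corresponding landmark points. That measurement is a one-liner; the coordinates below are made-up placements for illustration, not study data:

```python
import math

def mean_point_distance(points_a, points_b):
    """Mean Euclidean distance between corresponding (x, y) points."""
    assert len(points_a) == len(points_b)
    return sum(math.dist(p, q) for p, q in zip(points_a, points_b)) / len(points_a)

# Hypothetical placements of the five landmarks (two apices, aortic arch top,
# two costophrenic angles) by a radiologist and by a network, in pixels.
radiologist = [(100, 50), (300, 50), (200, 120), (80, 400), (320, 400)]
network     = [(103, 54), (297, 46), (200, 125), (85, 400), (320, 395)]
d = mean_point_distance(radiologist, network)
```

Converting the pixel distance to millimetres via the detector pixel spacing yields figures comparable to the 6.2 mm reported above.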
11. Boundary Restored Network for Subpleural Pulmonary Lesion Segmentation on Ultrasound Images at Local and Global Scales. J Digit Imaging 2021; 33:1155-1166. [PMID: 32556913] [DOI: 10.1007/s10278-020-00356-8]
Abstract
To evaluate the application of machine learning for the detection of subpleural pulmonary lesions (SPLs) in ultrasound (US) scans, we propose a novel boundary-restored network (BRN) for automated SPL segmentation, avoiding the issues associated with manual SPL segmentation (subjectivity, manual segmentation errors, and high time consumption). In total, 1612 ultrasound slices from 255 patients in which SPLs were visually present were exported. Segmentation performance was assessed using the Dice similarity coefficient (DSC), Matthews correlation coefficient (MCC), Jaccard similarity metric, average symmetric surface distance (ASSD), and maximum symmetric surface distance (MSSD). Our dual-stage BRN outperformed existing segmentation methods (U-Net and a fully convolutional network (FCN)) on all of these metrics: DSC (83.45 ± 16.60%), MCC (0.8330 ± 0.1626), Jaccard (0.7391 ± 0.1770), ASSD (5.68 ± 2.70 mm), and MSSD (15.61 ± 6.07 mm). It also outperformed the single-stage BRN in DSC by almost 5%. Our results suggest that deep learning algorithms can fully automate SPL segmentation in patients with SPLs. Further improvement of this technology might improve the specificity of lung cancer screening efforts and could lead to new applications of lung US imaging.
12. Tan J, Jing L, Huo Y, Li L, Akin O, Tian Y. LGAN: Lung segmentation in CT scans using generative adversarial network. Comput Med Imaging Graph 2021; 87:101817. [PMID: 33278767] [PMCID: PMC8477299] [DOI: 10.1016/j.compmedimag.2020.101817]
Abstract
Lung segmentation in computed tomography (CT) images plays an important role in the diagnosis of various lung diseases. Most current lung segmentation approaches proceed through a series of procedures with manual, empirical parameter adjustments at each step. Pursuing an automatic segmentation method with fewer steps, we propose a novel deep learning Generative Adversarial Network (GAN)-based lung segmentation schema, which we denote LGAN. The proposed schema can be generalized to different kinds of neural networks for lung segmentation in CT images. We evaluated LGAN on the Lung Image Database Consortium image collection (LIDC-IDRI) and the Quantitative Imaging Network (QIN) collection with two metrics, segmentation quality and shape similarity, and compared our work with current state-of-the-art methods. The experimental results demonstrate that the proposed LGAN schema is a promising tool for automatic lung segmentation due to its simplified procedure as well as its improved performance and efficiency.
Affiliation(s)
- Jiaxing Tan, Longlong Jing, Yumei Huo, Lihong Li, Yingli Tian: The City University of New York, New York, 10016, USA
- Oguz Akin: Memorial Sloan Kettering Cancer Center, New York, 10065, USA
13. Yahyatabar M, Jouvet P, Cheriet F. Dense-Unet: a light model for lung fields segmentation in chest X-ray images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1242-1245. [PMID: 33018212] [DOI: 10.1109/embc44109.2020.9176033]
Abstract
Automatic and accurate lung segmentation in chest X-ray (CXR) images is fundamental for computer-aided diagnosis systems, since the lung is the region of interest in many diseases and its contours can reveal useful information. While deep learning models have reached high performance in the segmentation of anatomical structures, the large number of training parameters is a concern, since it increases memory usage and reduces the generalization of the model. To address this, we propose a deep CNN model called Dense-Unet in which dense connectivity between layers increases information flow throughout the network. This lets us design a network with significantly fewer parameters while keeping the segmentation robust. To the best of our knowledge, Dense-Unet is the lightest deep model proposed for the segmentation of lung fields in CXR images. The model is evaluated on the JSRT and Montgomery datasets, and experiments show that its performance is comparable with state-of-the-art methods.
14. Kholiavchenko M, Sirazitdinov I, Kubrak K, Badrutdinova R, Kuleev R, Yuan Y, Vrtovec T, Ibragimov B. Contour-aware multi-label chest X-ray organ segmentation. Int J Comput Assist Radiol Surg 2020; 15:425-436. [PMID: 32034633] [DOI: 10.1007/s11548-019-02115-9]
Abstract
PURPOSE Segmentation of organs from chest X-ray images is an essential task for accurate and reliable diagnosis of lung diseases and chest organ morphometry. In this study, we investigated the benefits of augmenting state-of-the-art deep convolutional neural networks (CNNs) for image segmentation with organ contour information and evaluated such augmentation on segmentation of the lung fields, heart, and clavicles from chest X-ray images. METHODS Three state-of-the-art CNNs were augmented: the UNet and LinkNet architectures with the ResNeXt feature-extraction backbone, and the Tiramisu architecture with the DenseNet backbone. All CNN architectures were trained on ground-truth segmentation masks and additionally on the corresponding contours. The contribution of contour-based augmentation was evaluated against the contour-free architectures and against 20 existing algorithms for lung field segmentation. RESULTS The proposed contour-aware segmentation improved segmentation performance; when compared against existing algorithms on the same publicly available database of 247 chest X-ray images, the UNet architecture with the ResNeXt50 encoder combined with the contour-aware approach achieved the best overall performance, with Jaccard overlap coefficients of 0.971, 0.933, and 0.903 for the lung fields, heart, and clavicles, respectively. CONCLUSION We proposed to augment CNN architectures for CXR segmentation with organ contour information and were able to significantly improve segmentation accuracy, outperforming all existing solutions on a public chest X-ray database.
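Training on "the corresponding contours" presupposes deriving a contour map from each ground-truth mask. One standard recipe is the mask minus its (4-connected) erosion; a minimal NumPy sketch on a toy square mask, not the chest X-ray data:

```python
import numpy as np

def mask_contour(mask: np.ndarray) -> np.ndarray:
    """One-pixel-wide contour: foreground pixels with a 4-connected background neighbour."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])          # erosion with a cross element
    return (m & ~interior).astype(np.uint8)

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:5, 1:5] = 1                 # a 4x4 square
contour = mask_contour(mask)       # its 12 border pixels
```

Such contour maps can then serve as an auxiliary training target alongside the filled masks.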
Affiliation(s)
- K Kubrak, R Kuleev: Innopolis University, Innopolis, Russia
- Y Yuan: Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China
- T Vrtovec: Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- B Ibragimov: Innopolis University, Innopolis, Russia; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
16. Liu Y, Zhang X, Cai G, Chen Y, Yun Z, Feng Q, Yang W. Automatic delineation of ribs and clavicles in chest radiographs using fully convolutional DenseNets. Comput Methods Programs Biomed 2019; 180:105014. [PMID: 31430596] [DOI: 10.1016/j.cmpb.2019.105014]
Abstract
BACKGROUND AND OBJECTIVE In chest radiographs (CXRs), all bones and soft tissues overlap with each other, which makes CXRs difficult for radiologists to read and interpret. Delineating the ribs and clavicles helps suppress them from chest radiographs so that their effects on chest radiography analysis can be reduced. However, delineating ribs and clavicles automatically is difficult for methods without deep learning models; moreover, few such methods can delineate the anterior ribs effectively due to their faint edges in posterior-anterior (PA) CXRs. METHODS We present an effective deep learning method for automatically delineating posterior ribs, anterior ribs, and clavicles using a fully convolutional DenseNet (FC-DenseNet) as the pixel classifier. We use a pixel-weighted loss function to mitigate the uncertainty introduced during manual delineation and obtain robust predictions. RESULTS We conducted a comparative analysis with two other fully convolutional networks for edge detection and with the state-of-the-art method without deep learning models. The proposed method significantly outperforms these methods in quantitative evaluation metrics and visual perception. On the test dataset, the average recall, precision, and F-measure are 0.773 ± 0.030, 0.861 ± 0.043, and 0.814 ± 0.023, respectively, and the mean boundary distance (MBD) is 0.855 ± 0.642 pixels. The proposed method also performs well on the JSRT and NIH Chest X-ray datasets, indicating its generalizability across multiple databases. In addition, a preliminary result on suppressing the bone components of CXRs has been produced using our delineation system. CONCLUSIONS The proposed method can automatically delineate ribs and clavicles in CXRs and produce accurate edge maps.
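The pixel-weighted loss idea, down-weighting pixels whose edge labels are uncertain, can be sketched as a weighted binary cross-entropy. The predictions and weight map below are toy values; this illustrates the mechanism, not the paper's exact loss:

```python
import numpy as np

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy averaged with a per-pixel weight map."""
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float((weights * loss).sum() / weights.sum())

pred = np.array([[0.9, 0.2],
                 [0.6, 0.4]])      # predicted edge probabilities
target = np.array([[1.0, 0.0],
                   [1.0, 0.0]])    # reference delineation
uniform = np.ones_like(pred)
down_weighted = np.array([[1.0, 1.0],
                          [0.5, 0.5]])   # bottom row treated as uncertain
l_uniform = weighted_bce(pred, target, uniform)
l_weighted = weighted_bce(pred, target, down_weighted)
```

Because the noisier bottom row contributes less to the weighted average, the loss (and hence the gradient) is dominated by the confidently labelled pixels.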
Affiliation(s)
- Yunbi Liu, Xiao Zhang, Guangwei Cai, Yingyin Chen, Zhaoqiang Yun, Qianjin Feng, Wei Yang: School of Biomedical Engineering, Southern Medical University, 1023-1063 Shatai South Road, Baiyun District, 510515, Guangzhou, China
|
17
|
Candemir S, Antani S. A review on lung boundary detection in chest X-rays. Int J Comput Assist Radiol Surg 2019; 14:563-576. [PMID: 30730032 PMCID: PMC6420899 DOI: 10.1007/s11548-019-01917-1] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Accepted: 01/16/2019] [Indexed: 01/22/2023]
Abstract
PURPOSE Chest radiography is the most common imaging modality for pulmonary diseases. Due to its wide usage, there is a rich literature addressing automated detection of cardiopulmonary diseases in digital chest X-rays (CXRs). One of the essential steps in automated analysis of CXRs is localizing the relevant region of interest, i.e., isolating the lung region from other, less relevant parts, so that decision-making algorithms can be applied there. This article provides an overview of the recent literature on lung boundary detection in CXR images. METHODS We review the leading lung segmentation algorithms proposed in the period 2006-2017. First, we present a review of articles on posterior-anterior view CXRs. Then, we discuss studies that operate on lateral views. We pay particular attention to works that focus on deformed lungs and pediatric cases. We also highlight the radiographic measures extracted from the lung boundary and their use in automatically detecting cardiopulmonary abnormalities. Finally, we identify challenges in dataset curation and the expert delineation process, and we list publicly available CXR datasets. RESULTS (1) We classified algorithms into five categories: rule-based, pixel classification-based, model-based, hybrid, and deep learning-based algorithms. Based on the reviewed articles, hybrid methods and deep learning-based methods surpass the algorithms in the other classes and achieve segmentation performance as good as inter-observer performance. However, they require a long training process and have high computational complexity. (2) We found that most of the algorithms in the literature are evaluated on posterior-anterior view adult CXRs with healthy lung anatomy, without considering the challenges posed by abnormal CXRs. (3) We also found that there are few studies on pediatric CXRs. The lung appearance in pediatric cases, especially in infants, deviates from adult lung appearance due to pediatric developmental stages. Moreover, pediatric CXRs are noisier than adult CXRs due to interference from other objects, such as someone holding the child's arms or body, and irregular body pose. Therefore, lung boundary detection algorithms developed on adult CXRs may not perform accurately in pediatric cases and need additional constraints suited to pediatric CXR imaging characteristics. (4) We also note that one of the main challenges in medical image analysis is accessing suitable datasets. We list benchmark CXR datasets for developing and evaluating lung boundary algorithms. However, the number of CXR images with reference boundaries is limited due to the cumbersome but necessary process of expert boundary delineation. CONCLUSIONS A reliable computer-aided diagnosis system would need to support a greater variety of lung and background appearances. To our knowledge, algorithms in the literature are evaluated on posterior-anterior view adult CXRs with healthy lung anatomy, without considering ambiguous lung silhouettes due to pathological deformities, anatomical alterations due to misaligned body positioning, the patient's developmental stage, or gross background noise such as holding hands, jewelry, or the patient's head and legs in the CXR. Considering all these challenges, which are not well addressed in the literature, developing lung boundary detection algorithms that are robust to such interference remains a challenging task. We believe that a broad review of lung region detection algorithms will be useful for researchers working on automated detection/diagnosis algorithms for lung/heart pathologies in CXRs.
Affiliation(s)
- Sema Candemir
- Lister Hill National Center for Biomedical Communications, Communications Engineering Branch, National Library of Medicine, National Institutes of Health, Bethesda, USA
- Sameer Antani
- Lister Hill National Center for Biomedical Communications, Communications Engineering Branch, National Library of Medicine, National Institutes of Health, Bethesda, USA
|
18
|
Automatic Segmentation of Ulna and Radius in Forearm Radiographs. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2019; 2019:6490161. [PMID: 30838049 PMCID: PMC6374800 DOI: 10.1155/2019/6490161] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 12/31/2018] [Indexed: 12/01/2022]
Abstract
Automatic segmentation of the ulna and radius (UR) in forearm radiographs is a necessary step for single X-ray absorptiometry bone mineral density measurement and the diagnosis of osteoporosis. Accurate and robust segmentation of the UR is difficult, given the variation in forearms between patients and the nonuniform intensity in forearm radiographs. In this work, we propose a practical automatic UR segmentation method that uses the dynamic programming (DP) algorithm to trace UR contours. Four seed points along the four UR diaphysis edges are automatically located in the preprocessed radiographs. Then, minimum-cost paths in a cost map are traced from the seed points via the DP algorithm as UR edges and merged into the UR contours. The proposed method is quantitatively evaluated on 37 forearm radiographs with manual segmentation results, comprising 22 normal-exposure and 15 low-exposure radiographs. The average Dice similarity coefficient of our method reached 0.945. The average mean absolute distance between the contours extracted by our method and those drawn by a radiologist is only 5.04 pixels. The segmentation performance of our method did not differ significantly between the normal- and low-exposure radiographs. Our method was also validated on 105 forearm radiographs acquired under various imaging conditions from several hospitals. The results demonstrate that our method is fairly robust for forearm radiographs of various qualities.
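The minimum-cost path tracing described above can be sketched with a standard dynamic-programming recurrence over a cost map, where each row's accumulated cost considers the three nearest columns of the previous row. The 3-neighbour move set and the top-to-bottom orientation are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def trace_min_cost_path(cost):
    """Trace the minimum-cost top-to-bottom path through a 2-D cost map.

    Dynamic programming: each pixel may be reached from the three
    nearest columns of the row above. Returns the column index of the
    optimal path in each row (one entry per row).
    """
    h, w = cost.shape
    acc = cost.astype(float).copy()      # accumulated cost table
    back = np.zeros((h, w), dtype=int)   # backpointer: best column in row above
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            prev = acc[r - 1, lo:hi]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last row.
    path = [int(np.argmin(acc[-1]))]
    for r in range(h - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]
```

On a cost map derived from an edge-strength image (low cost along strong edges), the traced path follows the bone edge; seed points constrain where tracing starts.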
|
19
|
Lung boundary detection for chest X-ray images classification based on GLCM and probabilistic neural networks. ACTA ACUST UNITED AC 2019. [DOI: 10.1016/j.procs.2019.09.314] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
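The title above pairs GLCM texture features with a probabilistic neural network classifier. A minimal sketch of the GLCM side is below, assuming the image is already quantized to a small number of gray levels; the offset, level count, and three-feature set are illustrative assumptions, since the listing gives no methodological details.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised gray-level co-occurrence matrix for one (dx, dy) offset.

    `img` must already be quantised to integer values in [0, levels).
    Entry m[i, j] is the probability that a pixel of level i has a
    neighbour of level j at the given offset.
    """
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Three standard Haralick-style descriptors of a normalised GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float(np.sum(m * (i - j) ** 2)),
        "energy": float(np.sum(m ** 2)),
        "homogeneity": float(np.sum(m / (1.0 + np.abs(i - j)))),
    }
```

Such feature vectors, computed over lung regions at several offsets, would typically be fed to the classifier; in practice a library implementation such as scikit-image's `graycomatrix` is preferable to hand-rolled loops.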
|
20
|
Improving the Segmentation of Anatomical Structures in Chest Radiographs Using U-Net with an ImageNet Pre-trained Encoder. IMAGE ANALYSIS FOR MOVING ORGAN, BREAST, AND THORACIC IMAGES 2018. [DOI: 10.1007/978-3-030-00946-5_17] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|