1. Siracusano G, La Corte A, Nucera AG, Gaeta M, Chiappini M, Finocchio G. Effective processing pipeline PACE 2.0 for enhancing chest x-ray contrast and diagnostic interpretability. Sci Rep 2023; 13:22471. PMID: 38110512; PMCID: PMC10728198; DOI: 10.1038/s41598-023-49534-y.
Abstract
Preprocessing is an essential step in the analysis of digital medical images. X-ray images in particular may contain artifacts, low contrast, diffraction effects, or intensity inhomogeneities. We previously developed PACE, a procedure that improves chest X-ray (CXR) images and supports the clinical evaluation of pneumonia caused by COVID-19. During clinical benchmarking of that tool, certain conditions were found to reduce detail over large bright regions (as in ground-glass opacities and in pleural effusions in bedridden patients), producing oversaturated areas. Here we present PACE2.0, which significantly improves the overall performance of the original approach, including in those specific cases. It combines 2D image decomposition, non-local means denoising, gamma correction, and recursive algorithms to improve image quality. The tool was evaluated using four metrics: contrast improvement index (CII), information entropy (ENT), effective measure of enhancement (EME), and BRISQUE, showing average improvements of 35% in CII, 7.5% in ENT, 95.6% in EME, and 13% in BRISQUE over the original radiographs. Additionally, feeding the enhanced images to a pre-trained DenseNet-121 model for transfer learning increased classification accuracy from 80% to 94% and recall from 89% to 97%. These improvements potentially enhance the interpretability of lesion detection in CXRs. PACE2.0 could become a valuable tool for clinical decision support and could help healthcare professionals detect pneumonia more accurately.
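The abstract names the stages of the pipeline but not their implementations. As a hedged sketch (not the authors' code; the function names and the toy image below are our own assumptions), two of the named ingredients, gamma correction and the information-entropy (ENT) metric, can be written in a few lines of numpy:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Pointwise gamma correction for an image normalized to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def shannon_entropy(img, bins=256):
    """Information entropy (ENT) of the intensity histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
cxr = rng.random((64, 64))          # stand-in for a normalized chest X-ray
enhanced = gamma_correct(cxr, 0.7)  # gamma < 1 brightens dark regions
```

A higher ENT after processing is read as more information-rich intensity content; the paper's other metrics (CII, EME, BRISQUE) have their own standard definitions not reproduced here.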
Affiliation(s)
- Giulio Siracusano: Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125 Catania, Italy
- Aurelio La Corte: Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125 Catania, Italy
- Annamaria Giuseppina Nucera: Unit of Radiology, Department of Advanced Diagnostic-Therapeutic Technologies, "Bianchi-Melacrino-Morelli" Hospital, Via Giuseppe Melacrino 21, 89124 Reggio Calabria, Italy
- Michele Gaeta: Department of Biomedical Sciences, Dental and of Morphological and Functional Images, University of Messina, Via Consolare Valeria 1, 98125 Messina, Italy
- Massimo Chiappini: Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143 Rome, Italy; Maris Scarl, Via Vigna Murata 606, 00143 Rome, Italy
- Giovanni Finocchio: Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143 Rome, Italy; Department of Mathematical and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, V.le F. Stagno D'Alcontres 31, 98166 Messina, Italy
2. Liu Y, Zeng F, Ma M, Zheng B, Yun Z, Qin G, Yang W, Feng Q. Bone suppression of lateral chest x-rays with imperfect and limited dual-energy subtraction images. Comput Med Imaging Graph 2023; 105:102186. PMID: 36731328; DOI: 10.1016/j.compmedimag.2023.102186.
Abstract
Bone suppression aims to remove the bone components superimposed over the soft tissues within the lung area of a chest X-ray (CXR), which is potentially useful for subsequent lung disease diagnosis by radiologists as well as by computer-aided systems. While bone suppression for frontal CXRs is well studied, it remains challenging for lateral CXRs because the available dual-energy subtraction (DES) datasets containing paired lateral CXR and soft-tissue/bone images are limited and imperfect, and the lateral view has more complex anatomical structures. In this work, we propose a bone suppression method for lateral CXRs that leverages a two-stage distillation learning strategy and a dedicated data correction method. First, a primary model is trained on a real DES dataset with limited samples. The bone-suppressed results it produces on a relatively large lateral CXR dataset are then improved by a designed gradient correction method. Second, the corrected results serve as training samples for a distilled model. By automatically learning from both the primary model and the extra correction procedure, the distilled model is expected to improve on the primary model while omitting the tedious correction step. For both the primary and distilled models we adopt an ensemble model named MsDd-MAP, which learns complementary multi-scale and dual-domain (intensity and gradient) information and fuses it in a maximum-a-posteriori (MAP) framework. Our method is evaluated on a two-exposure lateral DES dataset of 46 subjects and a lateral CXR dataset of 240 subjects. The experimental results suggest that our method is superior to competing methods on the quantitative evaluation metrics. Furthermore, subjective evaluation by three experienced radiologists indicates that the distilled model produces more visually appealing soft-tissue images than the primary model, even comparable to real DES imaging for lateral CXRs.
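The two-stage distillation protocol described above can be illustrated with a deliberately tiny stand-in: least-squares linear "models" in place of the bone-suppression networks, and an identity placeholder where the paper applies its gradient correction. This is only a sketch of the training flow, not the MsDd-MAP method itself; all sizes and names are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(X, y):
    """Least-squares fit standing in for training a suppression network."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Stage 1: "primary" model trained on a small paired DES set (46 subjects).
X_small = rng.normal(size=(46, 5))
true_w = np.array([1.0, 2.0, 0.0, 0.0, 3.0])
y_small = X_small @ true_w + 0.01 * rng.normal(size=46)
primary = fit_linear(X_small, y_small)

# Stage 2: pseudo-labels on a larger unpaired set (240 subjects), then a
# correction step (identity placeholder here), then the distilled model.
X_large = rng.normal(size=(240, 5))
pseudo = X_large @ primary
corrected = pseudo  # the paper applies its gradient-correction method here
distilled = fit_linear(X_large, corrected)
```

With an identity correction the distilled model simply recovers the primary one; the point of the real method is that the correction step injects extra knowledge that the distilled network then absorbs.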
Affiliation(s)
- Yunbi Liu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong 518172, China; Shenzhen Research Institute of Big Data, Shenzhen, China; University of Science and Technology of China, Hefei, China
- Fengxia Zeng: Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mengwei Ma: Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Bowen Zheng: Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Zhaoqiang Yun: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Genggeng Qin: Radiology Department, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
3. Homayounieh F, Digumarthy SR, Febbo JA, Garrana S, Nitiwarangkul C, Singh R, Khera RD, Gilman M, Kalra MK. Comparison of Baseline, Bone-Subtracted, and Enhanced Chest Radiographs for Detection of Pneumothorax. Can Assoc Radiol J 2021; 72:519-524. DOI: 10.1177/0846537120908852.
Abstract
Purpose: To assess and compare the detectability of pneumothorax on unprocessed baseline, single-energy bone-subtracted, and enhanced frontal chest radiographs (chest X-ray, CXR). Method and Materials: Our retrospective, institutional review board-approved study included 202 patients (mean age 53 ± 24 years; 132 men, 70 women) who underwent frontal CXR and had trace, moderate, large, or tension pneumothorax. All patients (except those with tension pneumothorax) had concurrent chest computed tomography (CT). Two radiologists reviewed the CXR and chest CT for pneumothorax on the baseline CXR (ground truth). All baseline CXRs were processed to generate bone-subtracted and enhanced images (ClearRead X-ray). Four radiologists (R1-R4) assessed the baseline, bone-subtracted, and enhanced images and recorded the presence of pneumothorax (side, size, and detection confidence) for each image type. Area under the curve (AUC) was calculated with receiver operating characteristic analyses to determine the accuracy of pneumothorax detection. Results: Bone-subtracted images (AUC: 0.89-0.97) had the lowest accuracy for detection of pneumothorax compared to the baseline (AUC: 0.94-0.97) and enhanced (AUC: 0.96-0.99) radiographs (P < .01). Most false-positive and false-negative pneumothoraces occurred on the bone-subtracted images and the fewest on the enhanced radiographs. The highest detection rates and confidence were noted for the enhanced images (empiric AUC for R1-R4: 0.96-0.99). Conclusion: Enhanced CXRs are superior to bone-subtracted and unprocessed radiographs for detection of pneumothorax. Clinical Relevance/Application: Enhanced CXRs improve detection of pneumothorax over unprocessed images; bone-subtracted images must be reviewed cautiously to avoid false negatives.
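The study's reader comparison rests on empirical ROC AUC. For reference, that quantity can be computed directly from reader scores with the rank-sum (Mann-Whitney) formulation; this is a generic sketch, not the study's statistical code, and the example scores are invented:

```python
import numpy as np

def auc(labels, scores):
    """Empirical ROC AUC via the Mann-Whitney U formulation: the fraction
    of (positive, negative) pairs ranked correctly, counting ties as half."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy reader: two pneumothorax cases scored above two normal cases.
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.4])
```

An AUC of 1.0 means every positive outranks every negative; 0.5 is chance-level discrimination.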
Affiliation(s)
- Fatemeh Homayounieh: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Subba R. Digumarthy: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jennifer A. Febbo: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Sherief Garrana: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Chayanin Nitiwarangkul: Department of Diagnostic and Therapeutic Radiology, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Ramandeep Singh: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Ruhani Doda Khera: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Matthew Gilman: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Mannudeep K. Kalra: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
4. Kholiavchenko M, Sirazitdinov I, Kubrak K, Badrutdinova R, Kuleev R, Yuan Y, Vrtovec T, Ibragimov B. Contour-aware multi-label chest X-ray organ segmentation. Int J Comput Assist Radiol Surg 2020; 15:425-436. PMID: 32034633; DOI: 10.1007/s11548-019-02115-9.
Abstract
PURPOSE: Segmentation of organs from chest X-ray images is an essential task for accurate and reliable diagnosis of lung diseases and for chest organ morphometry. In this study, we investigated the benefits of augmenting state-of-the-art deep convolutional neural networks (CNNs) for image segmentation with organ contour information, and evaluated such augmentation on segmentation of the lung fields, heart, and clavicles from chest X-ray images. METHODS: Three state-of-the-art CNNs were augmented, namely the UNet and LinkNet architectures with a ResNeXt feature-extraction backbone, and the Tiramisu architecture with DenseNet. All CNN architectures were trained on ground-truth segmentation masks and, additionally, on the corresponding contours. The contribution of contour-based augmentation was evaluated against the contour-free architectures and against 20 existing algorithms for lung field segmentation. RESULTS: The proposed contour-aware segmentation improved segmentation performance. When compared against existing algorithms on the same publicly available database of 247 chest X-ray images, the UNet architecture with the ResNeXt50 encoder combined with the contour-aware approach achieved the best overall performance, with a Jaccard overlap coefficient of 0.971, 0.933, and 0.903 for the lung fields, heart, and clavicles, respectively. CONCLUSION: We proposed to augment CNN architectures for CXR segmentation with organ contour information, which significantly improved segmentation accuracy and outperformed all existing solutions on a public chest X-ray database.
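The Jaccard overlap coefficient reported above has a direct definition, intersection over union of the binary masks. A minimal numpy version (our own sketch, not the authors' evaluation code; the empty-mask convention is an assumption) looks like this:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard overlap |A ∩ B| / |A ∪ B| for binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat the agreement as perfect
    return np.logical_and(a, b).sum() / union
```

A value of 1.0 means the predicted and ground-truth organ masks coincide exactly; the clavicle score of 0.903 above is notably lower than the lung-field score because clavicles are small, thin structures.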
Affiliation(s)
- K Kubrak: Innopolis University, Innopolis, Russia
- R Kuleev: Innopolis University, Innopolis, Russia
- Y Yuan: Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China
- T Vrtovec: Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- B Ibragimov: Innopolis University, Innopolis, Russia; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
5. Matsubara N, Teramoto A, Saito K, Fujita H. Bone suppression for chest X-ray image using a convolutional neural filter. Australasian Physical & Engineering Sciences in Medicine 2019; 43. PMID: 31773501; DOI: 10.1007/s13246-019-00822-w.
Abstract
Chest X-rays are used in mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to address this problem, but their accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression, based on the convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs the bone-component value of a target pixel from the pixel values in its neighborhood; processing every position in the input image generates a bone-extracted image. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when the CNF used six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, when present, were imaged effectively, since only bone components were suppressed while soft tissue was maintained. These results suggest that the proposed method may reduce the chance of missing abnormalities and is useful for bone suppression in chest X-ray images.
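The CNF pipeline described above (predict a bone component for each pixel from its neighborhood, assemble a bone-extracted image, subtract it from the original) can be sketched as follows. The trained network is replaced here by a trivial mean filter purely for illustration; only the surrounding structure follows the abstract:

```python
import numpy as np

def neural_filter(patch):
    """Stand-in for the trained CNF: maps a neighborhood patch to the
    bone-component value of its center pixel (here a simple mean,
    purely for illustration)."""
    return patch.mean()

def bone_suppress(cxr, k=3):
    """Slide the filter over every pixel to build a bone-extracted image,
    then subtract it from the original, as the paper describes."""
    pad = k // 2
    padded = np.pad(cxr, pad, mode="edge")
    bone = np.empty_like(cxr)
    h, w = cxr.shape
    for i in range(h):
        for j in range(w):
            bone[i, j] = neural_filter(padded[i:i + k, j:j + k])
    return cxr - bone, bone

flat = np.full((8, 8), 0.5)           # toy "radiograph" with no structure
soft, bone = bone_suppress(flat)      # on a flat image, everything is "bone"
```

The double loop is deliberately naive; a real implementation would evaluate the network on batched patches or fully convolutionally.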
Affiliation(s)
- Naoki Matsubara: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi 470-1192, Japan
- Atsushi Teramoto: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi 470-1192, Japan
- Kuniaki Saito: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi 470-1192, Japan
- Hiroshi Fujita: Department of Electrical, Electronic & Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu 501-1194, Japan
6. Zarshenas A, Liu J, Forti P, Suzuki K. Separation of bones from soft tissue in chest radiographs: Anatomy-specific orientation-frequency-specific deep neural network convolution. Med Phys 2019; 46:2232-2242. PMID: 30848498; DOI: 10.1002/mp.13468.
Abstract
PURPOSE: Lung nodules that are missed by radiologists as well as by computer-aided detection (CAD) systems mostly overlap with ribs and clavicles. Removing the bony structures would result in better visualization of otherwise undetectable lesions. Our purpose in this study was to develop a virtual dual-energy imaging system to separate ribs and clavicles from soft tissue in chest radiographs. METHODS: We developed a mixture of anatomy-specific, orientation-frequency-specific (ASOFS) deep neural network convolution (NNC) experts. Anatomy-specific (AS) NNC was designed to separate the bony structures from soft tissue in different lung segments. While an AS design was proposed previously under our massive-training artificial neural network (MTANN) framework, in this work we mathematically defined an AS experts model, together with its learning and inference strategies, in a probabilistic deep-learning framework. In addition, in combination with the AS experts design, we proposed orientation-frequency-specific (OFS) NNC models that decompose bone and soft-tissue structures into specific orientation-frequency components at different scales using a multi-resolution decomposition technique. We trained multiple NNC models, each an expert for a specific orientation-frequency component in a particular anatomic segment. A perfect-reconstruction discrete wavelet transform was used for OFS decomposition and reconstruction, and we introduced a soft-gating layer to merge the predictions of the AS NNC experts. To train our model, we used bone images obtained from a dual-energy system as the target (teaching) images, with standard chest radiographs as the input. Training, validation, and testing were performed in a nested two-fold cross-validation manner. RESULTS: We used a database of 118 chest radiographs with pulmonary nodules to evaluate our NNC scheme, performing quantitative and qualitative evaluation of the bone and soft-tissue images predicted by our model and by a state-of-the-art technique, with the "gold-standard" dual-energy bone and soft-tissue images as references. Both evaluations demonstrated that our ASOFS NNC was superior to the state-of-the-art bone-suppression technique. In particular, our scheme better maintained the conspicuity of nodules and lung vessels, compared to the reference technique, while separating ribs and clavicles from soft tissue. Compared to the state-of-the-art bone-suppression technique, our bone images had substantially higher similarity (t-test; P < 0.01) to the "gold-standard" dual-energy bone images, in terms of the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). CONCLUSIONS: Our deep ASOFS NNC scheme can accurately decompose chest radiographs into bone and soft-tissue images, improving the conspicuity of lung nodules and vessels, and would therefore be useful for radiologists as well as CAD systems in detecting lung nodules in chest radiographs.
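The perfect-reconstruction discrete wavelet transform behind the OFS decomposition can be illustrated with its simplest case: a one-level 2-D Haar transform, which splits an image into an approximation band and three orientation detail bands and reconstructs it exactly. This is a generic sketch of the principle, not the paper's multi-resolution scheme:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform of an even-sized image: approximation
    (LL) plus three orientation detail bands (LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0   # row-pair averages
    d = (x[0::2] - x[1::2]) / 2.0   # row-pair differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: recovers the image exactly (perfect reconstruction)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

rng = np.random.default_rng(2)
img = rng.normal(size=(8, 8))
ll, lh, hl, hh = haar_dwt2(img)
recon = haar_idwt2(ll, lh, hl, hh)
```

Because the transform is perfectly invertible, an expert network can be trained per band and the band-wise predictions recombined without any loss from the decomposition itself.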
Affiliation(s)
- Amin Zarshenas: Medical Imaging Research Center & Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
- Junchi Liu: Medical Imaging Research Center & Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
- Paul Forti: Medical Imaging Research Center & Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
- Kenji Suzuki: Medical Imaging Research Center & Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
7. Lee H, Mansouri M, Tajmir S, Lev MH, Do S. A Deep-Learning System for Fully-Automated Peripherally Inserted Central Catheter (PICC) Tip Detection. J Digit Imaging 2018; 31:393-402. PMID: 28983851; PMCID: PMC6113157; DOI: 10.1007/s10278-017-0025-z.
Abstract
A peripherally inserted central catheter (PICC) is a thin catheter that is inserted via arm veins and threaded near the heart, providing intravenous access. The final catheter tip position is always confirmed on a chest radiograph (CXR) immediately after insertion, since malpositioned PICCs can cause potentially life-threatening complications. Although radiologists interpret PICC tip location with high accuracy, delays in interpretation can be significant. In this study, we propose a fully automated deep-learning system with a cascading segmentation architecture containing two fully convolutional neural networks for detecting a PICC line and its tip location. A preprocessing module performs image quality and dimension normalization, and a post-processing module localizes the PICC tip accurately by pruning false positives. Our best model, trained on 400 training cases and selectively tuned on 50 validation cases, achieved absolute distances from ground truth with a mean of 3.10 mm, a standard deviation of 2.03 mm, and a root-mean-square error (RMSE) of 3.71 mm on 150 held-out test cases. This system could help speed confirmation of PICC position and could be generalized to other types of vascular access and therapeutic support devices.
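The reported error statistics (mean, standard deviation, and RMSE of tip localization error) follow directly from the Euclidean distances between predicted and ground-truth tip coordinates. A small sketch (the coordinates below are made-up values, not study data):

```python
import numpy as np

def tip_errors(pred, truth):
    """Euclidean tip-localization errors (e.g. in mm) between predicted and
    ground-truth tip coordinates, summarized as mean, std, and RMSE."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    d = np.linalg.norm(pred - truth, axis=1)
    return d.mean(), d.std(), np.sqrt((d ** 2).mean())

pred = np.array([[10.0, 12.0], [5.0, 5.0]])   # hypothetical predicted tips
truth = np.array([[10.0, 9.0], [5.0, 9.0]])   # hypothetical ground truth
mean_d, std_d, rmse = tip_errors(pred, truth)
```

Note that RMSE is always at least the mean distance, which matches the paper's 3.71 mm RMSE versus 3.10 mm mean.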
Affiliation(s)
- Hyunkwang Lee: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Mohammad Mansouri: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Shahein Tajmir: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Michael H. Lev: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Synho Do: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
8. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. PMID: 28689314; DOI: 10.1007/s12194-017-0406-5.
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what changed in machine learning before and after the introduction of deep learning, (2) the source of the power of deep learning, (3) two major deep-learning models, the massive-training artificial neural network (MTANN) and the convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the direct learning of image data without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (image-based ML), which includes deep learning, has a long history but gained popularity recently with the new terminology. There are two major models in this class of ML in medical imaging, the MTANN and the CNN, which share similarities but also have several differences. In our experience, MTANNs were substantially more efficient to develop, performed better, and required fewer training cases than CNNs. Deep learning, or ML with image input, in medical imaging is an explosively growing and promising field, and ML with image input is expected to be the mainstream area of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki: Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan
9. Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Med Image Anal 2017; 35:421-433. PMID: 27589577; DOI: 10.1016/j.media.2016.08.004.